Downloads & Free Reading Options - Results

Semi Supervised Learning by Olivier Chapelle

Read "Semi Supervised Learning" by Olivier Chapelle through the free online access and download options listed below.

Books Results

Source: The Internet Archive

The Internet Archive Search Results

Books available for download or borrowing from The Internet Archive

1. Semi-described And Semi-supervised Learning With Gaussian Processes

By

Propagating input uncertainty through non-linear Gaussian process (GP) mappings is intractable. This hinders the task of training GPs using uncertain and partially observed inputs. In this paper we refer to this task as "semi-described learning". We then introduce a GP framework that solves both the semi-described and the semi-supervised learning problems (where missing values occur in the outputs). Auto-regressive state space simulation is also recognised as a special case of semi-described learning. To achieve our goal we develop variational methods for handling semi-described inputs in GPs, and couple them with algorithms that allow for imputing the missing values while treating the uncertainty in a principled, Bayesian manner. Extensive experiments on simulated and real-world data study the problems of iterative forecasting and regression/classification with missing values. The results suggest that the principled propagation of uncertainty stemming from our framework can significantly improve performance in these tasks.

“Semi-described And Semi-supervised Learning With Gaussian Processes” Metadata:

  • Title: ➤  Semi-described And Semi-supervised Learning With Gaussian Processes
  • Authors:
  • Language: English

“Semi-described And Semi-supervised Learning With Gaussian Processes” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. File size: 13.90 MB; downloaded 42 times; made public on Thu Jun 28 2018.

Available formats:
Abbyy GZ - Archive BitTorrent - DjVuTXT - Djvu XML - JPEG Thumb - Metadata - Scandata - Single Page Processed JP2 ZIP - Text PDF -

Related Links:

Online Marketplaces

Find Semi-described And Semi-supervised Learning With Gaussian Processes at online marketplaces:


2. Empirical Stationary Correlations For Semi-supervised Learning On Graphs

By

In semi-supervised learning on graphs, response variables observed at one node are used to estimate missing values at other nodes. The methods exploit correlations between nearby nodes in the graph. In this paper we prove that many such proposals are equivalent to kriging predictors based on a fixed covariance matrix driven by the link structure of the graph. We then propose a data-driven estimator of the correlation structure that exploits patterns among the observed response values. By incorporating even a small fraction of observed covariation into the predictions, we are able to obtain much improved prediction on two graph data sets.
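To make the kriging view concrete, the sketch below derives a covariance matrix from the graph Laplacian of a tiny path graph (one common "fixed covariance driven by the link structure") and predicts missing responses as the conditional Gaussian mean given the observed ones. The Laplacian-based covariance and the `eps` regularizer are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def graph_kriging(adj, y_obs, obs_idx, eps=1e-2):
    """Predict responses at all nodes from those observed at obs_idx."""
    n = adj.shape[0]
    deg = np.diag(adj.sum(axis=1))
    laplacian = deg - adj
    # Covariance driven by the link structure (regularized Laplacian inverse).
    cov = np.linalg.inv(laplacian + eps * np.eye(n))
    unobs_idx = [i for i in range(n) if i not in set(obs_idx)]
    c_uo = cov[np.ix_(unobs_idx, obs_idx)]
    c_oo = cov[np.ix_(obs_idx, obs_idx)]
    y_pred = np.zeros(n)
    y_pred[obs_idx] = y_obs
    # Kriging predictor: conditional mean of the unobserved nodes.
    y_pred[unobs_idx] = c_uo @ np.linalg.solve(c_oo, y_obs)
    return y_pred

# A 4-node path graph: observe the endpoints, predict the middle nodes.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
pred = graph_kriging(A, np.array([0.0, 1.0]), [0, 3])
```

On this path graph the predictions interpolate smoothly between the observed endpoint values, which is exactly the nearby-node-correlation behaviour the abstract describes.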

“Empirical Stationary Correlations For Semi-supervised Learning On Graphs” Metadata:

  • Title: ➤  Empirical Stationary Correlations For Semi-supervised Learning On Graphs
  • Authors:
  • Language: English

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. File size: 13.12 MB; downloaded 72 times; made public on Sat Sep 21 2013.

Available formats:
Abbyy GZ - Animated GIF - Archive BitTorrent - DjVu - DjVuTXT - Djvu XML - Item Tile - Metadata - Scandata - Single Page Processed JP2 ZIP - Text PDF -

Related Links:

Online Marketplaces

Find Empirical Stationary Correlations For Semi-supervised Learning On Graphs at online marketplaces:


3. Local Nearest Neighbour Classification With Applications To Semi-supervised Learning

By

We derive a new asymptotic expansion for the global excess risk of a local $k$-nearest neighbour classifier, where the choice of $k$ may depend upon the test point. This expansion elucidates conditions under which the dominant contribution to the excess risk comes from the locus of points at which each class label is equally likely to occur, but we also show that if these conditions are not satisfied, the dominant contribution may arise from the tails of the marginal distribution of the features. Moreover, we prove that, provided the $d$-dimensional marginal distribution of the features has a finite $\rho$th moment for some $\rho > 4$ (as well as other regularity conditions), a local choice of $k$ can yield a rate of convergence of the excess risk of $O(n^{-4/(d+4)})$, where $n$ is the sample size, whereas for the standard $k$-nearest neighbour classifier, our theory would require $d \geq 5$ and $\rho > 4d/(d-4)$ finite moments to achieve this rate. Our results motivate a new $k$-nearest neighbour classifier for semi-supervised learning problems, where the unlabelled data are used to obtain an estimate of the marginal feature density, and fewer neighbours are used for classification when this density estimate is small. The potential improvements over the standard $k$-nearest neighbour classifier are illustrated both through our theory and via a simulation study.
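The core idea above can be sketched in a few lines: estimate the marginal feature density from the unlabelled sample, and use fewer neighbours where that estimate is small. The density estimator and the k-selection rule below are simple illustrative choices, not the paper's tuned procedure.

```python
import math
from collections import Counter

def density_estimate(x, unlabelled, bandwidth=1.0):
    # Crude 1-D kernel density estimate from the unlabelled sample.
    return sum(math.exp(-((x - u) / bandwidth) ** 2) for u in unlabelled) / len(unlabelled)

def local_knn_predict(x, labelled, unlabelled, k_max=5, dens_threshold=0.2):
    dens = density_estimate(x, unlabelled)
    # Fewer neighbours in sparse regions (the tails of the marginal distribution).
    k = k_max if dens >= dens_threshold else max(1, k_max // 2)
    neighbours = sorted(labelled, key=lambda pt: abs(pt[0] - x))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Two well-separated 1-D clusters, with unlabelled points from the same clusters.
labelled = [(-2.0, 0), (-1.5, 0), (-1.0, 0), (1.0, 1), (1.5, 1), (2.0, 1)]
unlabelled = [-2.2, -1.8, -1.2, 1.2, 1.8, 2.2]
pred_neg = local_knn_predict(-1.7, labelled, unlabelled)
pred_pos = local_knn_predict(1.7, labelled, unlabelled)
```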

“Local Nearest Neighbour Classification With Applications To Semi-supervised Learning” Metadata:

  • Title: ➤  Local Nearest Neighbour Classification With Applications To Semi-supervised Learning
  • Authors:

“Local Nearest Neighbour Classification With Applications To Semi-supervised Learning” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. File size: 0.42 MB; downloaded 18 times; made public on Sat Jun 30 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Local Nearest Neighbour Classification With Applications To Semi-supervised Learning at online marketplaces:


4. Convex Formulation For Kernel PCA And Its Use In Semi-Supervised Learning

By

In this paper, Kernel PCA is reinterpreted as the solution to a convex optimization problem. In fact, there is a constrained convex problem for each principal component, whose constraints guarantee that the principal component is indeed a solution and not a mere saddle point. Although these insights do not imply any algorithmic improvement, they can be used to further understand the method, formulate possible extensions and properly address them. As an example, a new convex optimization problem for semi-supervised classification is proposed, which seems particularly well suited whenever the number of known labels is small. Our formulation resembles a Least Squares SVM problem with a regularization parameter multiplied by a negative sign, combined with a variational principle for Kernel PCA. Our primal optimization principle for semi-supervised learning is solved in terms of the Lagrange multipliers. Numerical experiments on several classification tasks illustrate the performance of the proposed model in problems with only a few labeled examples.
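For reference, the classical eigendecomposition view of Kernel PCA that the paper reinterprets can be sketched in a few lines: double-center the Gram matrix, take its top eigenvectors, and scale them into projections. This is a minimal textbook sketch, not the convex reformulation proposed here.

```python
import numpy as np

def kernel_pca(K, n_comp=2):
    """Project training points onto the top principal components in feature space."""
    n = K.shape[0]
    ones = np.full((n, n), 1.0 / n)
    # Double-center the Gram matrix (centering the data in feature space).
    Kc = K - ones @ K - K @ ones + ones @ K @ ones
    evals, evecs = np.linalg.eigh(Kc)
    idx = np.argsort(evals)[::-1][:n_comp]  # top components first
    # Projection of point i onto component j is evecs[i, j] * sqrt(evals[j]).
    return evecs[:, idx] * np.sqrt(evals[idx])

# Linear kernel on 2-D points whose variance is largest along the x-axis.
X = np.array([[2.0, 0.0], [-2.0, 0.0], [0.0, 0.5], [0.0, -0.5]])
proj = kernel_pca(X @ X.T)
```

With a linear kernel this reduces to ordinary PCA, so the first component recovers the x-axis, the direction of maximal variance.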

“Convex Formulation For Kernel PCA And Its Use In Semi-Supervised Learning” Metadata:

  • Title: ➤  Convex Formulation For Kernel PCA And Its Use In Semi-Supervised Learning
  • Authors:

“Convex Formulation For Kernel PCA And Its Use In Semi-Supervised Learning” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. File size: 0.52 MB; downloaded 23 times; made public on Fri Jun 29 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Convex Formulation For Kernel PCA And Its Use In Semi-Supervised Learning at online marketplaces:


5. Max-Margin Deep Generative Models For (Semi-)Supervised Learning

By

Deep generative models (DGMs) are effective at learning multilayered representations of complex data and performing inference of input data by exploiting their generative ability. However, the discriminative ability of DGMs for making accurate predictions is relatively weak. This paper presents max-margin deep generative models (mmDGMs) and a class-conditional variant (mmDCGMs), which explore the strongly discriminative principle of max-margin learning to improve the predictive performance of DGMs in both supervised and semi-supervised learning, while retaining the generative capability. In semi-supervised learning, we use the predictions of a max-margin classifier as the missing labels instead of performing full posterior inference for efficiency; we also introduce additional max-margin and label-balance regularization terms of unlabeled data for effectiveness. We develop an efficient doubly stochastic subgradient algorithm for the piecewise linear objectives in different settings. Empirical results on various datasets demonstrate that: (1) max-margin learning can significantly improve the prediction performance of DGMs while retaining the generative ability; (2) in supervised learning, mmDGMs are competitive with the best fully discriminative networks when employing convolutional neural networks as the generative and recognition models; and (3) in semi-supervised learning, mmDCGMs can perform efficient inference and achieve state-of-the-art classification results on several benchmarks.

“Max-Margin Deep Generative Models For (Semi-)Supervised Learning” Metadata:

  • Title: ➤  Max-Margin Deep Generative Models For (Semi-)Supervised Learning
  • Authors:

“Max-Margin Deep Generative Models For (Semi-)Supervised Learning” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. File size: 3.97 MB; downloaded 20 times; made public on Fri Jun 29 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Max-Margin Deep Generative Models For (Semi-)Supervised Learning at online marketplaces:


6. Semi-Supervised Learning -- A Statistical Physics Approach

By

We present a novel approach to semi-supervised learning which is based on statistical physics. Most prior work in semi-supervised learning classifies the points by minimizing a certain energy function, which corresponds to a minimal k-way cut solution. In contrast to these methods, we estimate the distribution of classifications, instead of the sole minimal k-way cut, which yields more accurate and robust results. Our approach may be applied to all energy functions used for semi-supervised learning. The method is based on sampling using a multicanonical Markov chain Monte Carlo algorithm, and has a straightforward probabilistic interpretation, which allows for soft assignments of points to classes and can cope with previously unseen class types. The suggested approach is demonstrated on a toy data set and on two real-life data sets of gene expression.
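The distribution-over-labellings idea can be illustrated with a plain Metropolis chain at a fixed temperature (the paper uses a more sophisticated multicanonical MCMC): sample labellings of the unlabelled nodes under a Boltzmann weight on the cut energy and read off marginal class probabilities, i.e. soft assignments. The toy graph, temperature, and step count below are all illustrative choices.

```python
import math
import random

# A 6-node path; node 0 is fixed to class 0, node 5 to class 1.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
fixed = {0: 0, 5: 1}
free_nodes = [1, 2, 3, 4]

def energy(labels):
    # k-way cut energy: number of edges whose endpoints disagree.
    return sum(1 for a, b in edges if labels[a] != labels[b])

def sample_marginals(beta=2.0, steps=20000, seed=1):
    rng = random.Random(seed)
    labels = dict(fixed)
    for v in free_nodes:
        labels[v] = rng.randint(0, 1)
    counts = {v: 0 for v in free_nodes}
    e = energy(labels)
    for _ in range(steps):
        v = rng.choice(free_nodes)
        labels[v] ^= 1                       # propose flipping one node
        e_new = energy(labels)
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            e = e_new                        # accept (Metropolis rule)
        else:
            labels[v] ^= 1                   # reject: flip back
        for u in free_nodes:
            counts[u] += labels[u]
    # Marginal probability of class 1 at each free node: a soft assignment.
    return {v: counts[v] / steps for v in free_nodes}

marg = sample_marginals()
```

Nodes near the fixed class-0 endpoint get a low probability of class 1 and nodes near the class-1 endpoint a high one, rather than the hard all-or-nothing labels a single minimal cut would give.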

“Semi-Supervised Learning -- A Statistical Physics Approach” Metadata:

  • Title: ➤  Semi-Supervised Learning -- A Statistical Physics Approach
  • Authors:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. File size: 7.62 MB; downloaded 78 times; made public on Sat Jul 20 2013.

Available formats:
Abbyy GZ - Animated GIF - Archive BitTorrent - DjVu - DjVuTXT - Djvu XML - Item Tile - Metadata - Scandata - Single Page Processed JP2 ZIP - Text PDF -

Related Links:

Online Marketplaces

Find Semi-Supervised Learning -- A Statistical Physics Approach at online marketplaces:


7. Crop Classification Using Semi Supervised Learning On Data Fusion Of SAR And Optical Sensor

By

Crop maps are essential tools for creating crop inventories, forecasting yields, and guiding the use of efficient farm management techniques. These maps must be created at highly exact scales, necessitating difficult, costly, and time-consuming fieldwork. Deep learning algorithms have now significantly enhanced outcomes when using data in the geographical and temporal dimensions, which are essential for agricultural research. The simultaneous availability of Sentinel-1 (synthetic aperture radar) and Sentinel-2 (optical) data provides an excellent chance to combine them. Sentinel-1 and Sentinel-2 data sets were collected for the Cape Town, South Africa, region. Using these datasets, we apply the fusion technique, particularly the layer-level fusion strategy, one of the three fusion procedures (input level, layer level, and decision level). We also compare the results before and after fusion and discuss the recommended method for converting from a multilayer perceptron decoder to a semi-supervised decoder architecture. The total testing accuracy produced by the Ada-Match semi-supervised decoder approach was 80.3%. We conduct studies to demonstrate that our methodology not only outperforms prior state-of-the-art approaches in terms of precision but also significantly decreases processing time and memory requirements.

“Crop Classification Using Semi Supervised Learning On Data Fusion Of SAR And Optical Sensor” Metadata:

  • Title: ➤  Crop Classification Using Semi Supervised Learning On Data Fusion Of SAR And Optical Sensor
  • Author:
  • Language: English

“Crop Classification Using Semi Supervised Learning On Data Fusion Of SAR And Optical Sensor” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. File size: 14.13 MB; downloaded 88 times; made public on Thu Jul 06 2023.

Available formats:
Archive BitTorrent - DjVuTXT - Djvu XML - Item Tile - Metadata - OCR Page Index - OCR Search Text - Page Numbers JSON - Scandata - Single Page Processed JP2 ZIP - Text PDF - chOCR - hOCR -

Related Links:

Online Marketplaces

Find Crop Classification Using Semi Supervised Learning On Data Fusion Of SAR And Optical Sensor at online marketplaces:


8. ERIC ED615586: More With Less: Exploring How To Use Deep Learning Effectively Through Semi-Supervised Learning For Automatic Bug Detection In Student Code

By

Automatically detecting bugs in student program code is critical to enable formative feedback to help students pinpoint errors and resolve them. Deep learning models especially code2vec and ASTNN have shown great success for "large-scale" code classification. It is not clear, however, whether they can be effectively used for bug detection when the amount of labeled data is limited. In this work, we investigated the effectiveness of code2vec and ASTNN against classic machine learning models by varying the amount of labeled data from 1% up to 100%. With a few exceptions, the two deep learning models outperform the classic models. More interestingly, our results showed that when the amount of labeled data is small, code2vec is more effective, while ASTNN is more effective with more training data; for both code2vec and ASTNN, the more labeled data, the better. To further improve their effectiveness, we investigated the potential of semi-supervised learning which can leverage a large amount of unlabeled data to improve their performance. Our results showed that semi-supervised learning is indeed beneficial especially for ASTNN. [For the full proceedings, see ED615472.]

“ERIC ED615586: More With Less: Exploring How To Use Deep Learning Effectively Through Semi-Supervised Learning For Automatic Bug Detection In Student Code” Metadata:

  • Title: ➤  ERIC ED615586: More With Less: Exploring How To Use Deep Learning Effectively Through Semi-Supervised Learning For Automatic Bug Detection In Student Code
  • Author:
  • Language: English

“ERIC ED615586: More With Less: Exploring How To Use Deep Learning Effectively Through Semi-Supervised Learning For Automatic Bug Detection In Student Code” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. File size: 18.27 MB; downloaded 38 times; made public on Wed Jul 13 2022.

Available formats:
Archive BitTorrent - DjVuTXT - Djvu XML - Item Tile - Metadata - OCR Page Index - OCR Search Text - Page Numbers JSON - Scandata - Single Page Processed JP2 ZIP - Text PDF - chOCR - hOCR -

Related Links:

Online Marketplaces

Find ERIC ED615586: More With Less: Exploring How To Use Deep Learning Effectively Through Semi-Supervised Learning For Automatic Bug Detection In Student Code at online marketplaces:


9. The Pessimistic Limits Of Margin-based Losses In Semi-supervised Learning

By

We show that for linear classifiers defined by convex margin-based surrogate losses that are monotonically decreasing, it is impossible to construct any semi-supervised approach that is able to guarantee an improvement over the supervised classifier measured by this surrogate loss. For non-monotonically decreasing loss functions, we demonstrate safe improvements are possible.

“The Pessimistic Limits Of Margin-based Losses In Semi-supervised Learning” Metadata:

  • Title: ➤  The Pessimistic Limits Of Margin-based Losses In Semi-supervised Learning
  • Authors:

“The Pessimistic Limits Of Margin-based Losses In Semi-supervised Learning” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. File size: 0.28 MB; downloaded 20 times; made public on Fri Jun 29 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find The Pessimistic Limits Of Margin-based Losses In Semi-supervised Learning at online marketplaces:


10. Semi-supervised Learning For Potential Human MicroRNA-disease Associations Inference.

By

This article is from Scientific Reports, volume 4. Abstract: MicroRNAs play a critical role in the development and progression of various diseases. Predicting potential miRNA-disease associations from a vast amount of biological data is an important problem in biomedical research. Considering the limitations of previous methods, we developed Regularized Least Squares for MiRNA-Disease Association (RLSMDA) to uncover the relationship between diseases and miRNAs. RLSMDA can work for diseases without known related miRNAs. Furthermore, it is a semi-supervised method (it does not need negative samples) and a global one (it prioritizes associations for all diseases simultaneously). Based on leave-one-out cross validation, reliable AUC values demonstrated the performance of RLSMDA. We also applied RLSMDA to Hepatocellular cancer and Lung cancer and implemented global prediction for all diseases simultaneously. As a result, 80% (Hepatocellular cancer) and 84% (Lung cancer) of the top 50 predicted miRNAs, and 75% of the top 20 potential associations based on global prediction, have been confirmed by biological experiments. We also applied RLSMDA to diseases without known related miRNAs in the golden standard dataset. As a result, among the top 3 potential related miRNAs predicted by RLSMDA for 32 diseases, 34 disease-miRNA associations were successfully confirmed by experiments. It is anticipated that RLSMDA will be a useful bioinformatics resource for biomedical research.
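The regularized-least-squares core of this kind of method has a simple closed form: scores for all candidates come from solving one linear system against a similarity matrix, with no negative samples required. The tiny similarity matrix and the `eta` regularizer below are illustrative assumptions, not RLSMDA's actual disease/miRNA similarity data.

```python
import numpy as np

def rls_scores(sim, y, eta=1.0):
    """Closed-form regularized least squares: scores = S (S + eta*I)^-1 y."""
    n = sim.shape[0]
    return sim @ np.linalg.solve(sim + eta * np.eye(n), y)

# Three candidates; candidate 0 is a known association (y = 1).
# Candidate 1 is similar to candidate 0, candidate 2 is not.
S = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
y = np.array([1.0, 0.0, 0.0])
scores = rls_scores(S, y)
```

The unlabelled candidate that is similar to the known positive receives a markedly higher score than the dissimilar one, which is how such a scorer ranks novel associations.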

“Semi-supervised Learning For Potential Human MicroRNA-disease Associations Inference.” Metadata:

  • Title: ➤  Semi-supervised Learning For Potential Human MicroRNA-disease Associations Inference.
  • Authors:
  • Language: English

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. File size: 10.72 MB; downloaded 77 times; made public on Sat Oct 18 2014.

Available formats:
Abbyy GZ - Animated GIF - Archive BitTorrent - DjVu - DjVuTXT - Djvu XML - Item Tile - JSON - Metadata - Scandata - Single Page Processed JP2 ZIP - Text PDF -

Related Links:

Online Marketplaces

Find Semi-supervised Learning For Potential Human MicroRNA-disease Associations Inference. at online marketplaces:


11. Hand Pose Estimation Through Semi-Supervised And Weakly-Supervised Learning

By

We propose a method for hand pose estimation based on a deep regressor trained on two different kinds of input. Raw depth data is fused with an intermediate representation in the form of a segmentation of the hand into parts. This intermediate representation contains important topological information and provides useful cues for reasoning about joint locations. The mapping from raw depth to segmentation maps is learned in a semi/weakly-supervised way from two different datasets: (i) a synthetic dataset created through a rendering pipeline including densely labeled ground truth (pixelwise segmentations); and (ii) a dataset with real images for which ground truth joint positions are available, but not dense segmentations. Loss for training on real images is generated from a patch-wise restoration process, which aligns tentative segmentation maps with a large dictionary of synthetic poses. The underlying premise is that the domain shift between synthetic and real data is smaller in the intermediate representation, where labels carry geometric and topological meaning, than in the raw input domain. Experiments on the NYU dataset show that the proposed training method decreases error on joints over direct regression of joints from depth data by 15.7%.

“Hand Pose Estimation Through Semi-Supervised And Weakly-Supervised Learning” Metadata:

  • Title: ➤  Hand Pose Estimation Through Semi-Supervised And Weakly-Supervised Learning
  • Authors:

“Hand Pose Estimation Through Semi-Supervised And Weakly-Supervised Learning” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. File size: 1.44 MB; downloaded 18 times; made public on Thu Jun 28 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Hand Pose Estimation Through Semi-Supervised And Weakly-Supervised Learning at online marketplaces:


12. Temporal Ensembling For Semi-Supervised Learning

By

In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.
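The target-update mechanics described above can be sketched without a real network: per-sample predictions are accumulated with an exponential moving average across epochs, and a startup bias correction divides by (1 - alpha**t). The noisy "predictions" and the alpha value below are illustrative stand-ins for an actual network-in-training.

```python
import numpy as np

def update_targets(Z, z_epoch, t, alpha=0.6):
    """One epoch of the temporal-ensembling target update."""
    Z = alpha * Z + (1 - alpha) * z_epoch   # accumulate the ensemble prediction
    targets = Z / (1 - alpha ** t)          # bias-corrected training target
    return Z, targets

rng = np.random.default_rng(0)
# Pretend the network's "true" per-sample class probabilities are stable,
# but each epoch's outputs are perturbed by augmentation/dropout noise.
true_probs = np.array([[0.9, 0.1], [0.2, 0.8]])
Z = np.zeros_like(true_probs)
for t in range(1, 51):
    noisy = np.clip(true_probs + rng.normal(0, 0.05, true_probs.shape), 0, 1)
    Z, targets = update_targets(Z, noisy, t)
```

After a few dozen epochs the ensembled targets are much closer to the underlying probabilities than any single noisy epoch output, which is why they make better training targets for the unlabelled samples.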

“Temporal Ensembling For Semi-Supervised Learning” Metadata:

  • Title: ➤  Temporal Ensembling For Semi-Supervised Learning
  • Authors:

“Temporal Ensembling For Semi-Supervised Learning” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. File size: 0.36 MB; downloaded 59 times; made public on Fri Jun 29 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Temporal Ensembling For Semi-Supervised Learning at online marketplaces:


13. Scalable Semi-Supervised Learning Over Networks Using Nonsmooth Convex Optimization

By

We propose a scalable method for semi-supervised (transductive) learning from massive network-structured datasets. Our approach to semi-supervised learning is based on representing the underlying hypothesis as a graph signal with small total variation. Requiring a small total variation of the graph signal representing the underlying hypothesis corresponds to the central smoothness assumption that forms the basis for semi-supervised learning, i.e., input points forming clusters have similar output values or labels. We formulate the learning problem as a nonsmooth convex optimization problem which we solve by appealing to Nesterov's optimal first-order method for nonsmooth optimization. We also provide a message passing formulation of the learning method which allows for a highly scalable implementation in big data frameworks.

“Scalable Semi-Supervised Learning Over Networks Using Nonsmooth Convex Optimization” Metadata:

  • Title: ➤  Scalable Semi-Supervised Learning Over Networks Using Nonsmooth Convex Optimization
  • Authors:

“Scalable Semi-Supervised Learning Over Networks Using Nonsmooth Convex Optimization” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. File size: 1.00 MB; downloaded 23 times; made public on Fri Jun 29 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Scalable Semi-Supervised Learning Over Networks Using Nonsmooth Convex Optimization at online marketplaces:


14. Semi-supervised Learning Of Deep Metrics For Stereo Reconstruction

By

Deep-learning metrics have recently demonstrated extremely good performance for matching image patches in stereo reconstruction. However, training such metrics requires a large amount of labeled stereo images, which can be difficult or costly to collect for certain applications. The main contribution of our work is a new semi-supervised method for learning deep metrics from unlabeled stereo images, given coarse information about the scenes and the optical system. Our method alternately optimizes the metric with a standard stochastic gradient descent, and applies stereo constraints to regularize its prediction. Experiments on reference datasets show that, for a given network architecture, training with this new method without ground truth produces a metric with performance as good as state-of-the-art baselines trained with the said ground truth. This work has three practical implications. Firstly, it helps to overcome limitations of training sets, in particular noisy ground truth. Secondly, it allows the use of much more training data during learning. Thirdly, it allows the deep metric to be tuned for a particular stereo system, even if ground truth is not available.

“Semi-supervised Learning Of Deep Metrics For Stereo Reconstruction” Metadata:

  • Title: ➤  Semi-supervised Learning Of Deep Metrics For Stereo Reconstruction
  • Authors:

“Semi-supervised Learning Of Deep Metrics For Stereo Reconstruction” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. File size: 0.51 MB; downloaded 25 times; made public on Fri Jun 29 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Semi-supervised Learning Of Deep Metrics For Stereo Reconstruction at online marketplaces:


15. Virtual Adversarial Training: A Regularization Method For Supervised And Semi-supervised Learning

By

We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the output distribution. Virtual adversarial loss is defined as the robustness of the model's posterior distribution against local perturbation around each input data point. Our method is similar to adversarial training, but differs from adversarial training in that it determines the adversarial direction based only on the output distribution and that it is applicable to a semi-supervised setting. Because the directions in which we smooth the model are virtually adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of the virtual adversarial loss can be computed with no more than two pairs of forward and backpropagations. In our experiments, we applied VAT to supervised and semi-supervised learning on multiple benchmark datasets. With an additional improvement based on the entropy minimization principle, our VAT achieves state-of-the-art performance on SVHN and CIFAR-10 for semi-supervised learning tasks.
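The virtual adversarial loss itself needs no labels: it measures how much a worst-case small perturbation of the input can change the model's output distribution. The real method finds that direction efficiently with power iteration on gradients; the dependency-free sketch below instead crudely searches a handful of random unit directions and keeps the largest KL divergence, using a fixed toy logistic model. All names and the toy model are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def model(x, W):
    # A fixed linear-softmax "network" standing in for a trained model.
    return softmax(W @ x)

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def virtual_adversarial_loss(x, W, eps=0.5, n_dirs=64, seed=0):
    """Crude estimate: max KL(p(x) || p(x+r)) over random perturbations of norm eps."""
    rng = np.random.default_rng(seed)
    p = model(x, W)
    best = 0.0
    for _ in range(n_dirs):
        r = rng.normal(size=x.shape)
        r = eps * r / np.linalg.norm(r)
        best = max(best, kl(p, model(x + r, W)))
    return best

W = np.array([[1.0, -1.0], [-1.0, 1.0]])
x_near_boundary = np.array([0.1, 0.0])   # the model is unsure here
x_confident = np.array([3.0, 0.0])       # the model is confident here
lds_boundary = virtual_adversarial_loss(x_near_boundary, W)
lds_confident = virtual_adversarial_loss(x_confident, W)
```

Points near the decision boundary incur a much larger virtual adversarial loss than confident ones, so minimizing it over unlabelled data pushes the boundary away from dense regions, which is the local-smoothness effect the abstract describes.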

“Virtual Adversarial Training: A Regularization Method For Supervised And Semi-supervised Learning” Metadata:

  • Title: ➤  Virtual Adversarial Training: A Regularization Method For Supervised And Semi-supervised Learning
  • Authors:

“Virtual Adversarial Training: A Regularization Method For Supervised And Semi-supervised Learning” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. File size: 7.76 MB; downloaded 25 times; made public on Sat Jun 30 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Virtual Adversarial Training: A Regularization Method For Supervised And Semi-supervised Learning at online marketplaces:


16. Semi-Supervised Learning With Measure Propagation

By


“Semi-Supervised Learning With Measure Propagation” Metadata:

  • Title: ➤  Semi-Supervised Learning With Measure Propagation
  • Authors:

Edition Identifiers:

Downloads Information:

The book is available for download in "data" format. File size: 0.02 MB; downloaded 25 times; made public on Tue Aug 11 2020.

Available formats:
Archive BitTorrent - BitTorrent - Metadata - Unknown -

Related Links:

Online Marketplaces

Find Semi-Supervised Learning With Measure Propagation at online marketplaces:


17. Microsoft Research Video 116066: Semi-supervised Learning In Gigantic Image Collections

By

With the advent of the Internet it is now possible to collect hundreds of millions of images. These images come with varying degrees of label information. “Clean labels” can be manually obtained on a small fraction, “noisy labels” may be extracted automatically from surrounding text, while for most images there are no labels at all. Semi-supervised learning is a principled framework for combining these different label sources. However, it scales polynomially with the number of images, making it impractical for use on gigantic collections with hundreds of millions of images and thousands of classes. In this paper we show how to utilize recent results in machine learning to obtain highly efficient approximations for semi-supervised learning. Specifically, we use the convergence of the eigenvectors of the normalized graph Laplacian to eigenfunctions of weighted Laplace-Beltrami operators. We combine this with a label sharing framework obtained from WordNet to propagate label information to classes lacking manual annotations. Our algorithm enables us to apply semi-supervised learning to a database of 80 million images with 74 thousand classes. Joint work with Yair Weiss and Antonio Torralba. ©2009 Microsoft Corporation. All rights reserved.
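The underlying trick is to solve the semi-supervised problem in the span of the smoothest eigenvectors of the normalized graph Laplacian, so the cost depends on the number of eigenvectors kept rather than the number of images. At scale the talk's method approximates those eigenvectors by eigenfunctions; the sketch below uses exact eigenvectors on a toy two-cluster graph, and the graph, `k`, and `lam` values are illustrative assumptions.

```python
import numpy as np

def eigenbasis_ssl(adj, y, labelled_mask, k=2, lam=0.1):
    """Fit labels in the span of the k smoothest normalized-Laplacian eigenvectors."""
    d = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    lap = np.eye(len(d)) - d_inv_sqrt @ adj @ d_inv_sqrt   # normalized graph Laplacian
    evals, evecs = np.linalg.eigh(lap)                     # ascending eigenvalues
    U, S = evecs[:, :k], np.diag(evals[:k])                # smoothest k eigenvectors
    M = np.diag(labelled_mask.astype(float))               # weight only labelled nodes
    # Minimize (f - y)^T M (f - y) + lam * f^T L f over f = U @ alpha: a k x k system.
    alpha = np.linalg.solve(U.T @ M @ U + lam * S, U.T @ M @ y)
    return U @ alpha

# Two triangle clusters joined by one weak edge; one labelled node per cluster.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
A[2, 3] = A[3, 2] = 0.1
y = np.array([1.0, 0, 0, -1.0, 0, 0])
mask = np.array([1, 0, 0, 1, 0, 0])
pred = eigenbasis_ssl(A, y, mask)
```

Two labels are enough to classify all six nodes, because the linear system being solved is only k x k regardless of graph size.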

“Microsoft Research Video 116066: Semi-supervised Learning In Gigantic Image Collections” Metadata:

  • Title: ➤  Microsoft Research Video 116066: Semi-supervised Learning In Gigantic Image Collections
  • Author:
  • Language: English

“Microsoft Research Video 116066: Semi-supervised Learning In Gigantic Image Collections” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "movies" format, the size of the file-s is: 761.29 Mbs, the file-s for this book were downloaded 58 times, the file-s went public at Sat Sep 27 2014.

Available formats:
Animated GIF - Archive BitTorrent - Item Tile - Metadata - Ogg Video - Thumbnail - Windows Media - h.264 -

Related Links:

Online Marketplaces

Find Microsoft Research Video 116066: Semi-supervised Learning In Gigantic Image Collections at online marketplaces:


18. Semi-supervised Learning Of A Non-native Phonetic Contrast

By


“Semi-supervised Learning Of A Non-native Phonetic Contrast” Metadata:

  • Title: ➤  Semi-supervised Learning Of A Non-native Phonetic Contrast
  • Author:

Edition Identifiers:

Downloads Information:

The book is available for download in "data" format, the size of the file-s is: 0.52 Mbs, the file-s for this book were downloaded 2 times, the file-s went public at Thu Aug 19 2021.

Available formats:
Archive BitTorrent - Metadata - ZIP -

Related Links:

Online Marketplaces

Find Semi-supervised Learning Of A Non-native Phonetic Contrast at online marketplaces:


19. Galaxy Morphology Classification Using A Semi-supervised Learning Algorithm Based On Dynamic Threshold

By

Galaxy Morphology Classification Using a Semi-supervised Learning Algorithm Based on Dynamic Threshold. Authors: Jie Jiang, Jinqu Zhang, Xiangru Li, Hui Li and Ping Du. Submitted: 2023-12-15 11:28:08. Abstract: Machine learning has become a crucial technique for classifying the morphology of galaxies as a result of the meteoric growth of galactic data. Unfortunately, traditional supervised learning carries significant labeling costs, since it needs a large amount of labeled data to be effective. FixMatch, a semi-supervised learning algorithm, has become a key tool for exploiting large amounts of unlabeled data. Nevertheless, its performance degrades significantly on large, imbalanced data sets, since FixMatch relies on a fixed threshold to filter pseudo-labels. Therefore, this study proposes a dynamic threshold alignment algorithm based on the FixMatch model. First, the reliable pseudo-label ratio is determined for the class with the most samples, and the remaining classes' reliable pseudo-label ratios are approximated accordingly. Second, based on the estimated reliable pseudo-label ratio for each category, the algorithm dynamically calculates the threshold for choosing pseudo-labels. Employing this dynamic threshold decreases the per-category accuracy bias and improves learning for classes with fewer samples. Experimental results show that in galaxy morphology classification tasks the proposed algorithm significantly improves performance compared with supervised learning. When the amount of labeled data is 100, the accuracy and F1-score are improved by 12.8% and 12.6%, respectively. Compared with popular semi-supervised algorithms such as FixMatch and MixMatch, the proposed algorithm has better classification performance, greatly reducing the accuracy bias of each category.
When the amount of labeled data is 1000, the accuracy on cigar-shaped smooth galaxies, the class with the smallest sample, is improved by 37.94% compared to FixMatch. Journal: Research in Astronomy and Astrophysics. Category: Physics >> Geophysics, Astronomy and Astrophysics. Status: published in journal. Citation: ChinaXiv:202312.00144 (or this version, ChinaXiv:202312.00144V1). DOI: https://doi.org/10.1088/1674-4527/acf610. CSTR: 32003.36.ChinaXiv.202312.00144.V1. Recommended citation: Jie Jiang, Jinqu Zhang, Xiangru Li, Hui Li and Ping Du (2023). Galaxy Morphology Classification Using a Semi-supervised Learning Algorithm Based on Dynamic Threshold. Research in Astronomy and Astrophysics. doi: https://doi.org/10.1088/1674-4527/acf610. Version history: [V1] 2023-12-15 11:28:08, ChinaXiv:202312.00144V1.
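A minimal sketch of the dynamic-threshold idea (the scaling rule below is illustrative, not the paper's exact formula): per-class pseudo-label thresholds are lowered for classes whose reliable-pseudo-label ratio is small relative to the largest class, so that rare classes are not filtered away wholesale the way a single fixed FixMatch-style threshold would.

```python
def dynamic_thresholds(probs, base_tau=0.95):
    """probs: per-sample class-probability lists on unlabeled data."""
    n_cls = len(probs[0])
    # Count confident predictions per class at the base threshold.
    hits = [0] * n_cls
    for p in probs:
        c = max(range(n_cls), key=lambda k: p[k])
        if p[c] >= base_tau:
            hits[c] += 1
    top = max(hits) or 1
    # Classes with fewer reliable pseudo-labels get a proportionally
    # lower threshold (additive smoothing keeps it nonzero).
    return [base_tau * (hits[c] + 1) / (top + 1) for c in range(n_cls)]

def select_pseudo_labels(probs, taus):
    out = []
    for i, p in enumerate(probs):
        c = max(range(len(p)), key=lambda k: p[k])
        if p[c] >= taus[c]:
            out.append((i, c))
    return out

# Class 0 is common and confidently predicted; class 1 is rare and
# only moderately confident, so a fixed 0.95 threshold would drop it.
probs = [[0.97, 0.03], [0.96, 0.04], [0.99, 0.01], [0.3, 0.7], [0.2, 0.8]]
taus = dynamic_thresholds(probs)
selected = select_pseudo_labels(probs, taus)
```

With the fixed threshold only class-0 samples survive; with the dynamic one, the minority-class samples also contribute pseudo-labels, which is the mechanism the abstract credits for the large accuracy gain on the smallest class.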

“Galaxy Morphology Classification Using A Semi-supervised Learning Algorithm Based On Dynamic Threshold” Metadata:

  • Title: ➤  Galaxy Morphology Classification Using A Semi-supervised Learning Algorithm Based On Dynamic Threshold
  • Author: ➤  

“Galaxy Morphology Classification Using A Semi-supervised Learning Algorithm Based On Dynamic Threshold” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format, the size of the file-s is: 12.91 Mbs, the file-s for this book were downloaded 14 times, the file-s went public at Sat Jun 15 2024.

Available formats:
DjVuTXT - Djvu XML - Item Tile - Metadata - OCR Page Index - OCR Search Text - Page Numbers JSON - Scandata - Single Page Processed JP2 ZIP - Text PDF - chOCR - hOCR -

Related Links:

Online Marketplaces

Find Galaxy Morphology Classification Using A Semi-supervised Learning Algorithm Based On Dynamic Threshold at online marketplaces:


20. [SAIF 2019] Day2: Geometric Deep Learning For Forecasting And Semi-supervised Learning - Joan Bruna

By

Geometric Deep Learning is an emerging paradigm for processing graph-structured data with end-to-end trainable models, Graph Neural Networks (GNNs), which can leverage prior knowledge about the data domain while offering large expressive power. Such an attractive tradeoff has resulted in state-of-the-art performance over diverse domains, ranging from social networks and biology to knowledge bases and finance. In this talk, I will present recent advances in our group covering both theory and applications. On the theory side, we quantify both the approximation power of GNN architectures and their stability to graph perturbations, resulting in principled architecture design. We will illustrate these theoretical advances with applications to semi-supervised learning and forecasting in dynamic graphs modeling social network data, and discuss broader applications to recommender systems and fraud detection. #SAIF #SamsungAIForum #SR #SamsungResearch Source: https://www.youtube.com/watch?v=3fKEzsRlefk Uploader: Samsung
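The basic GNN building block the talk builds on can be sketched in a few lines (assuming NumPy; the weights and graph are illustrative, not from the talk): each layer averages features over neighbourhoods via a normalized adjacency, applies a learned linear map, and a nonlinearity.

```python
import numpy as np

def gcn_layer(A, H, W):
    # One graph-convolution step: add self-loops, row-normalize the
    # adjacency, aggregate neighbour features, transform, apply ReLU.
    A_hat = A + np.eye(len(A))
    d = A_hat.sum(axis=1)
    A_norm = A_hat / d[:, None]
    return np.maximum(A_norm @ H @ W, 0.0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path
H = np.eye(3)                 # one-hot input features
W = np.full((3, 2), 0.5)      # illustrative weight matrix
out = gcn_layer(A, H, W)
```

Stacking such layers, with labels supervising only a few nodes, is the standard way GNNs are applied to semi-supervised node classification.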

“[SAIF 2019] Day2: Geometric Deep Learning For Forecasting And Semi-supervised Learning - Joan Bruna” Metadata:

  • Title: ➤  [SAIF 2019] Day2: Geometric Deep Learning For Forecasting And Semi-supervised Learning - Joan Bruna
  • Author:

“[SAIF 2019] Day2: Geometric Deep Learning For Forecasting And Semi-supervised Learning - Joan Bruna” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "movies" format, the size of the file-s is: 293.50 Mbs, the file-s for this book were downloaded 12 times, the file-s went public at Fri Sep 16 2022.

Available formats:
Archive BitTorrent - Item Tile - JSON - MPEG4 - Metadata - Thumbnail - Unknown -

Related Links:

Online Marketplaces

Find [SAIF 2019] Day2: Geometric Deep Learning For Forecasting And Semi-supervised Learning - Joan Bruna at online marketplaces:


21. Microsoft Research Audio 116066: Semi-supervised Learning In Gigantic Image Collections

By

With the advent of the Internet it is now possible to collect hundreds of millions of images. These images come with varying degrees of label information. “Clean labels” can be manually obtained on a small fraction, “noisy labels” may be extracted automatically from surrounding text, while for most images there are no labels at all. Semi-supervised learning is a principled framework for combining these different label sources. However, it scales polynomially with the number of images, making it impractical for use on gigantic collections with hundreds of millions of images and thousands of classes. In this paper we show how to utilize recent results in machine learning to obtain highly efficient approximations for semi-supervised learning. Specifically, we use the convergence of the eigenvectors of the normalized graph Laplacian to eigenfunctions of weighted Laplace-Beltrami operators. We combine this with a label sharing framework obtained from WordNet to propagate label information to classes lacking manual annotations. Our algorithm enables us to apply semi-supervised learning to a database of 80 million images with 74 thousand classes.

“Microsoft Research Audio 116066: Semi-supervised Learning In Gigantic Image Collections” Metadata:

  • Title: ➤  Microsoft Research Audio 116066: Semi-supervised Learning In Gigantic Image Collections
  • Author:
  • Language: English

“Microsoft Research Audio 116066: Semi-supervised Learning In Gigantic Image Collections” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "audio" format, the size of the file-s is: 44.94 Mbs, the file-s for this book were downloaded 10 times, the file-s went public at Wed Dec 02 2015.

Available formats:
Archive BitTorrent - Columbia Peaks - Essentia High GZ - Essentia Low GZ - Item Tile - Metadata - Ogg Vorbis - PNG - Spectrogram - VBR MP3 -

Related Links:

Online Marketplaces

Find Microsoft Research Audio 116066: Semi-supervised Learning In Gigantic Image Collections at online marketplaces:


22. Semi-supervised And Unsupervised Machine Learning : Novel Strategies

By


“Semi-supervised And Unsupervised Machine Learning : Novel Strategies” Metadata:

  • Title: ➤  Semi-supervised And Unsupervised Machine Learning : Novel Strategies
  • Author:
  • Language: English

“Semi-supervised And Unsupervised Machine Learning : Novel Strategies” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format, the size of the file-s is: 506.26 Mbs, the file-s for this book were downloaded 15 times, the file-s went public at Sat Jul 22 2023.

Available formats:
ACS Encrypted PDF - Cloth Cover Detection Log - DjVuTXT - Djvu XML - Dublin Core - Item Tile - JPEG Thumb - JSON - LCP Encrypted EPUB - LCP Encrypted PDF - Log - MARC - MARC Binary - Metadata - OCR Page Index - OCR Search Text - PNG - Page Numbers JSON - RePublisher Final Processing Log - RePublisher Initial Processing Log - Scandata - Single Page Original JP2 Tar - Single Page Processed JP2 ZIP - Text PDF - Title Page Detection Log - chOCR - hOCR -

Related Links:

Online Marketplaces

Find Semi-supervised And Unsupervised Machine Learning : Novel Strategies at online marketplaces:


23. Nonparametric Semi-supervised Learning Of Class Proportions

By

The problem of developing binary classifiers from positive and unlabeled data is often encountered in machine learning. A common requirement in this setting is to approximate posterior probabilities of positive and negative classes for a previously unseen data point. This problem can be decomposed into two steps: (i) the development of accurate predictors that discriminate between positive and unlabeled data, and (ii) the accurate estimation of the prior probabilities of positive and negative examples. In this work we primarily focus on the latter subproblem. We study nonparametric class prior estimation and formulate this problem as an estimation of mixing proportions in two-component mixture models, given a sample from one of the components and another sample from the mixture itself. We show that estimation of mixing proportions is generally ill-defined and propose a canonical form to obtain identifiability while maintaining the flexibility to model any distribution. We use insights from this theory to elucidate the optimization surface of the class priors and propose an algorithm for estimating them. To address the problems of high-dimensional density estimation, we provide practical transformations to low-dimensional spaces that preserve class priors. Finally, we demonstrate the efficacy of our method on univariate and multivariate data.
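The two-sample setup the abstract describes can be illustrated with a simple histogram-ratio estimator (this is a generic sketch of mixture-proportion estimation, not the paper's nonparametric procedure; distributions and bin counts are illustrative): in any region where the negative class has no mass, the density ratio f_mix / f_pos equals the class prior, so the minimum of the binned ratio recovers it.

```python
import random

random.seed(1)
ALPHA = 0.3   # true positive-class prior, unknown to the estimator

# Sample from the positive component, and from the mixture
# alpha * Uniform(0, 1) + (1 - alpha) * Uniform(0.5, 1.5).
pos = [random.uniform(0.0, 1.0) for _ in range(2000)]
mix = [random.uniform(0.0, 1.0) if random.random() < ALPHA
       else random.uniform(0.5, 1.5) for _ in range(2000)]

def prior_estimate(component, mixture, lo=0.0, hi=1.5, bins=6, floor=0.05):
    # Minimum over bins (with enough component mass) of the ratio of
    # empirical bin frequencies; bins where only positives live pin the
    # ratio to the class prior.
    width = (hi - lo) / bins
    def frac(sample, b):
        return sum(lo + b * width <= x < lo + (b + 1) * width
                   for x in sample) / len(sample)
    ratios = []
    for b in range(bins):
        fp, fm = frac(component, b), frac(mixture, b)
        if fp >= floor:
            ratios.append(fm / fp)
    return min(ratios)

alpha_hat = prior_estimate(pos, mix)
```

Without such a "pure positive" region the minimum only upper-bounds the prior, which is exactly the ill-posedness the paper's canonical form is designed to resolve.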

“Nonparametric Semi-supervised Learning Of Class Proportions” Metadata:

  • Title: ➤  Nonparametric Semi-supervised Learning Of Class Proportions
  • Authors:

“Nonparametric Semi-supervised Learning Of Class Proportions” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format, the size of the file-s is: 3.59 Mbs, the file-s for this book were downloaded 29 times, the file-s went public at Fri Jun 29 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Nonparametric Semi-supervised Learning Of Class Proportions at online marketplaces:


24. Efficiently Detecting Overlapping Communities Through Seeding And Semi-Supervised Learning

By

Seeding then expanding is a commonly used scheme to discover overlapping communities in a network. Most seeding methods are either too complex to scale to large networks or too simple to select high-quality seeds, and the non-principled functions used by most expanding methods lead to poor performance when applied to diverse networks. This paper proposes a new method that transforms a network into a corpus where each edge is treated as a document, and all nodes of the network are treated as terms of the corpus. An effective seeding method is also proposed that selects seeds as a training set, then a principled expanding method based on semi-supervised learning is applied to classify edges. We compare our new algorithm with four other community detection algorithms on a wide range of synthetic and empirical networks. Experimental results show that the new algorithm can significantly improve clustering performance in most cases. Furthermore, the time complexity of the new algorithm is linear to the number of edges, and this low complexity makes the new algorithm scalable to large networks.
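The transformation the abstract states can be sketched directly (the edge labels below are a hypothetical output of the expanding step, just to show how edge classification induces overlapping node communities): each edge becomes a "document" whose "terms" are its endpoint nodes, and a node inherits every community its incident edges belong to.

```python
# Two triangles sharing node 2 via the bridge edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
corpus = [{u, v} for u, v in edges]   # one "document" per edge

def node_communities(edge_labels):
    # A node belongs to every community assigned to one of its edges,
    # which is what makes the detected communities overlapping.
    comms = {}
    for (u, v), c in zip(edges, edge_labels):
        comms.setdefault(u, set()).add(c)
        comms.setdefault(v, set()).add(c)
    return comms

# Hypothetical edge classification: the two triangles get communities 0 and 1.
labels = [0, 0, 0, 1, 1, 1, 1]
comms = node_communities(labels)
```

Node 2 sits on edges from both communities and therefore ends up in both, while interior nodes stay in exactly one.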

“Efficiently Detecting Overlapping Communities Through Seeding And Semi-Supervised Learning” Metadata:

  • Title: ➤  Efficiently Detecting Overlapping Communities Through Seeding And Semi-Supervised Learning
  • Authors:

“Efficiently Detecting Overlapping Communities Through Seeding And Semi-Supervised Learning” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format, the size of the file-s is: 0.43 Mbs, the file-s for this book were downloaded 19 times, the file-s went public at Sat Jun 30 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Efficiently Detecting Overlapping Communities Through Seeding And Semi-Supervised Learning at online marketplaces:


25. Exponential Family Hybrid Semi-Supervised Learning

By

We present an approach to semi-supervised learning based on an exponential family characterization. Our approach generalizes previous work on coupled priors for hybrid generative/discriminative models. Our model is more flexible and natural than previous approaches. Experimental results on several data sets show that our approach also performs better in practice.

“Exponential Family Hybrid Semi-Supervised Learning” Metadata:

  • Title: ➤  Exponential Family Hybrid Semi-Supervised Learning
  • Authors:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format, the size of the file-s is: 6.18 Mbs, the file-s for this book were downloaded 77 times, the file-s went public at Tue Sep 17 2013.

Available formats:
Abbyy GZ - Animated GIF - Archive BitTorrent - DjVu - DjVuTXT - Djvu XML - Item Tile - Metadata - Scandata - Single Page Processed JP2 ZIP - Text PDF -

Related Links:

Online Marketplaces

Find Exponential Family Hybrid Semi-Supervised Learning at online marketplaces:


26. Semi-supervised Deep Learning By Metric Embedding

By

Deep networks are successfully used as classification models, yielding state-of-the-art results when trained on a large number of labeled samples. These models, however, are usually much less suited for semi-supervised problems because of their tendency to overfit when trained on small amounts of data. In this work we explore a new training objective that targets a semi-supervised regime with only a small subset of labeled data. This criterion is based on a deep metric embedding over distance relations within the set of labeled samples, together with constraints over the embeddings of the unlabeled set. The final learned representations are discriminative in Euclidean space, and hence can be used for subsequent nearest-neighbor classification using the labeled samples.
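The classification step the abstract ends on is plain nearest-neighbor search in the embedding space (the embeddings below are hand-made stand-ins for what the learned network would produce):

```python
import math

def nearest_neighbor_label(query, labeled):
    """labeled: list of (embedding_vector, label) pairs."""
    def dist(a, b):
        # Euclidean distance, matching the discriminative space the
        # training objective produces.
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(labeled, key=lambda pair: dist(query, pair[0]))[1]

# Stand-in embeddings: same-class points cluster together.
labeled = [([0.0, 0.1], "cat"), ([0.1, 0.0], "cat"),
           ([1.0, 0.9], "dog"), ([0.9, 1.0], "dog")]
pred = nearest_neighbor_label([0.05, 0.05], labeled)
```

All the heavy lifting thus sits in learning the embedding; inference needs no classifier head at all.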

“Semi-supervised Deep Learning By Metric Embedding” Metadata:

  • Title: ➤  Semi-supervised Deep Learning By Metric Embedding
  • Authors:

“Semi-supervised Deep Learning By Metric Embedding” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format, the size of the file-s is: 0.31 Mbs, the file-s for this book were downloaded 25 times, the file-s went public at Fri Jun 29 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Semi-supervised Deep Learning By Metric Embedding at online marketplaces:


27. Mutual Exclusivity Loss For Semi-Supervised Deep Learning

By

In this paper we consider the problem of semi-supervised learning with deep Convolutional Neural Networks (ConvNets). Semi-supervised learning is motivated by the observation that unlabeled data is cheap and can be used to improve the accuracy of classifiers. In this paper we propose an unsupervised regularization term that explicitly forces the classifier's predictions for multiple classes to be mutually exclusive and effectively guides the decision boundary to lie in the low-density space between the manifolds corresponding to different classes of data. Our proposed approach is general and can be used with any backpropagation-based learning method. We show through different experiments that our method can improve the object recognition performance of ConvNets using unlabeled data.
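One simple penalty in this spirit (an illustrative formulation, not confirmed by the text to be the paper's exact loss) treats the K class outputs as independent Bernoullis and penalizes the negative log-probability that exactly one of them fires; the penalty vanishes only at one-hot predictions, pushing the decision boundary into low-density regions.

```python
import math

def mutual_exclusivity_penalty(p):
    # Probability that exactly one class "fires" if the outputs were
    # independent Bernoullis; -log of it is minimized by one-hot vectors.
    exactly_one = sum(
        pk * math.prod(1 - pj for j, pj in enumerate(p) if j != k)
        for k, pk in enumerate(p)
    )
    return -math.log(max(exactly_one, 1e-12))

sharp = mutual_exclusivity_penalty([0.999, 0.001])  # near one-hot: small
flat = mutual_exclusivity_penalty([0.5, 0.5])       # ambiguous: large
```

Since the penalty needs only the model's outputs, it can be evaluated on unlabeled examples and added to any backpropagation-based objective.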

“Mutual Exclusivity Loss For Semi-Supervised Deep Learning” Metadata:

  • Title: ➤  Mutual Exclusivity Loss For Semi-Supervised Deep Learning
  • Authors:

“Mutual Exclusivity Loss For Semi-Supervised Deep Learning” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format, the size of the file-s is: 0.51 Mbs, the file-s for this book were downloaded 24 times, the file-s went public at Fri Jun 29 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Mutual Exclusivity Loss For Semi-Supervised Deep Learning at online marketplaces:


28. Causal Discovery As Semi-Supervised Learning

By

In this short report, we discuss an approach to estimating causal graphs in which indicators of causal influence between variables are treated as labels in a machine learning formulation. Available data on the variables of interest are used as "inputs" to estimate the labels. We frame the problem as one of semi-supervised learning: available interventional data or background knowledge provide labels on some edges in the graph and the remaining edges are treated as unlabelled objects. To illustrate the key ideas, we consider a simple approach to feature construction (rooted in bivariate kernel density estimation) and embed this within a semi-supervised manifold framework. Results on yeast knockout data demonstrate that the proposed approach can identify causal relationships as validated by unseen interventional experiments. An advantage of the formulation we propose is that by reframing causal discovery as semi-supervised learning, it allows a range of data-driven approaches to be brought to bear on causal discovery, without demanding specification of full probability models or explicit models of underlying mechanisms.

“Causal Discovery As Semi-Supervised Learning” Metadata:

  • Title: ➤  Causal Discovery As Semi-Supervised Learning
  • Authors:

“Causal Discovery As Semi-Supervised Learning” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format, the size of the file-s is: 0.33 Mbs, the file-s for this book were downloaded 17 times, the file-s went public at Fri Jun 29 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Causal Discovery As Semi-Supervised Learning at online marketplaces:


29. Semi-supervised Learning With Explicit Relationship Regularization

By

In many learning tasks, the structure of the target space of a function holds rich information about the relationships between evaluations of functions on different data points. Existing approaches attempt to exploit this relationship information implicitly by enforcing smoothness on function evaluations only. However, what happens if we explicitly regularize the relationships between function evaluations? Inspired by homophily, we regularize based on a smooth relationship function, either defined from the data or with labels. In experiments, we demonstrate that this significantly improves the performance of state-of-the-art algorithms in semi-supervised classification and in spectral data embedding for constrained clustering and dimensionality reduction.

“Semi-supervised Learning With Explicit Relationship Regularization” Metadata:

  • Title: ➤  Semi-supervised Learning With Explicit Relationship Regularization
  • Authors:

“Semi-supervised Learning With Explicit Relationship Regularization” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format, the size of the file-s is: 0.57 Mbs, the file-s for this book were downloaded 15 times, the file-s went public at Fri Jun 29 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Semi-supervised Learning With Explicit Relationship Regularization at online marketplaces:


30. Manifold Based Low-rank Regularization For Image Restoration And Semi-supervised Learning

By

Low-rank structures play an important role in recent advances on many problems in image science and data science. As a natural extension of low-rank structures to data with nonlinear structure, the concept of a low-dimensional manifold structure has been considered in many data processing problems. Inspired by this concept, we consider a manifold-based low-rank regularization as a linear approximation of manifold dimension. This regularization is less restrictive than global low-rank regularization and thus enjoys more flexibility in handling data with nonlinear structures. As applications, we apply the proposed regularization to classical inverse problems in image science and data science, including image inpainting, image super-resolution, X-ray computed tomography (CT) image reconstruction, and semi-supervised learning. We conduct intensive numerical experiments on several image restoration problems and a semi-supervised learning problem of classifying handwritten digits using the MNIST data. Our numerical tests demonstrate the effectiveness of the proposed methods and illustrate that the new regularization methods produce outstanding results compared with many existing methods.
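The workhorse behind many low-rank regularizers is singular-value soft-thresholding, which the manifold-based variants apply in a localized or linearized fashion; the sketch below (assuming NumPy; data and threshold are illustrative) shows the global version recovering a rank-1 signal from noise.

```python
import numpy as np

def svt(M, tau):
    # Proximal operator of the nuclear norm: shrink every singular
    # value by tau and drop those that fall below zero.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
low_rank = np.outer(rng.normal(size=8), rng.normal(size=8))  # rank-1 signal
noisy = low_rank + 0.01 * rng.normal(size=(8, 8))            # small noise
denoised = svt(noisy, tau=0.1)
```

The threshold kills the small singular values contributed by the noise while leaving the dominant structure essentially intact, which is the effect the regularization exploits inside the inverse problems listed above.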

“Manifold Based Low-rank Regularization For Image Restoration And Semi-supervised Learning” Metadata:

  • Title: ➤  Manifold Based Low-rank Regularization For Image Restoration And Semi-supervised Learning
  • Authors:

“Manifold Based Low-rank Regularization For Image Restoration And Semi-supervised Learning” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format, the size of the file-s is: 3.65 Mbs, the file-s for this book were downloaded 22 times, the file-s went public at Sat Jun 30 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Manifold Based Low-rank Regularization For Image Restoration And Semi-supervised Learning at online marketplaces:


31. Active Semi-Supervised Learning Method With Hybrid Deep Belief Networks.

By

This article is from PLoS ONE, volume 9. Abstract: In this paper, we develop a novel semi-supervised learning algorithm called active hybrid deep belief networks (AHD) to address the semi-supervised sentiment classification problem with deep learning. First, we construct the first several hidden layers using restricted Boltzmann machines (RBM), which can quickly reduce the dimension and abstract the information of the reviews. Second, we construct the subsequent hidden layers using convolutional restricted Boltzmann machines (CRBM), which can abstract the information of reviews effectively. Third, the constructed deep architecture is fine-tuned by gradient-descent-based supervised learning with an exponential loss function. Finally, an active learning method is combined with the proposed deep architecture. We ran several experiments on five sentiment classification datasets and show that AHD is competitive with previous semi-supervised learning algorithms. Experiments are also conducted to verify the effectiveness of the proposed method with different numbers of labeled and unlabeled reviews.

“Active Semi-Supervised Learning Method With Hybrid Deep Belief Networks.” Metadata:

  • Title: ➤  Active Semi-Supervised Learning Method With Hybrid Deep Belief Networks.
  • Authors:
  • Language: English

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format, the size of the file-s is: 6.07 Mbs, the file-s for this book were downloaded 72 times, the file-s went public at Fri Oct 03 2014.

Available formats:
Abbyy GZ - Animated GIF - Archive BitTorrent - DjVu - DjVuTXT - Djvu XML - Item Tile - JSON - Metadata - Scandata - Single Page Processed JP2 ZIP - Text PDF -

Related Links:

Online Marketplaces

Find Active Semi-Supervised Learning Method With Hybrid Deep Belief Networks. at online marketplaces:


32. Hybrid Approach For Inductive Semi Supervised Learning Using Label Propagation And Support Vector Machine

By

Semi-supervised learning methods have gained importance because of the large expense and time involved in having human experts label unlabeled data. The proposed hybrid approach uses SVM and Label Propagation to label the unlabeled data. In the process, at each step an SVM is trained to minimize the error and thus improve the prediction quality. Experiments are conducted using SVM and logistic regression (Logreg). Results show that SVM performs substantially better than Logreg. The approach is tested on 12 datasets of different sizes, ranging from the order of 1,000s to the order of 10,000s. Results show that the proposed approach outperforms Label Propagation by a large margin, with an F-measure almost twice as high on average. A parallel version of the proposed approach is also designed and implemented; the analysis shows that training time decreases significantly when the parallel version is used.
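The label-propagation half of such a hybrid can be sketched in pure Python (the SVM retraining step is omitted, and the graph and labels are illustrative): iterate degree-normalized neighbourhood averaging of the label distributions while clamping the labeled nodes.

```python
def label_propagation(adj, seeds, n_classes, iters=100):
    # adj: adjacency matrix (list of lists); seeds: {node: class}.
    n = len(adj)
    deg = [sum(row) for row in adj]
    F = [[0.0] * n_classes for _ in range(n)]
    for i, y in seeds.items():
        F[i][y] = 1.0
    for _ in range(iters):
        # Average each node's label distribution over its neighbours.
        G = [[sum(adj[i][j] * F[j][c] for j in range(n)) / deg[i]
              for c in range(n_classes)] for i in range(n)]
        for i, y in seeds.items():   # clamp the labeled nodes
            G[i] = [0.0] * n_classes
            G[i][y] = 1.0
        F = G
    return [max(range(n_classes), key=lambda c: F[i][c]) for i in range(n)]

# Two triangles joined by the edge 2-3; one seed label per side.
adj = [[0,1,1,0,0,0], [1,0,1,0,0,0], [1,1,0,1,0,0],
       [0,0,1,0,1,1], [0,0,0,1,0,1], [0,0,1,1,1,0]]
adj[5] = [0,0,0,1,1,0]   # keep the matrix symmetric
preds = label_propagation(adj, {0: 0, 5: 1}, n_classes=2)
```

In the hybrid of the abstract, the propagated soft labels would then feed an SVM whose confident predictions refine the label set on the next round.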

“Hybrid Approach For Inductive Semi Supervised Learning Using Label Propagation And Support Vector Machine” Metadata:

  • Title: ➤  Hybrid Approach For Inductive Semi Supervised Learning Using Label Propagation And Support Vector Machine
  • Authors:

“Hybrid Approach For Inductive Semi Supervised Learning Using Label Propagation And Support Vector Machine” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format, the size of the file-s is: 0.56 Mbs, the file-s for this book were downloaded 24 times, the file-s went public at Thu Jun 28 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Hybrid Approach For Inductive Semi Supervised Learning Using Label Propagation And Support Vector Machine at online marketplaces:


33. Semi-Supervised Learning On Graphs Through Reach And Distance Diffusion

By

Semi-supervised learning (SSL) is an indispensable tool when there are few labeled entities and many unlabeled entities for which we want to predict labels. With graph-based methods, entities correspond to nodes in a graph and edges represent strong relations. At the heart of SSL algorithms is the specification of a dense kernel of pairwise affinity values from the graph structure. A learning algorithm is then trained on the kernel together with labeled entities. The most popular kernels are spectral and include the highly scalable "symmetric" Laplacian methods, which compute soft labels using Jacobi iterations, and "asymmetric" methods including Personalized PageRank (PPR), which use short random walks and apply to directed relations, such as like, follow, or hyperlinks. We introduce Reach diffusion and Distance diffusion kernels that build on powerful social and economic models of centrality and influence in networks and capture the directed pairwise relations that underlie social influence. Inspired by the success of social influence as an alternative to spectral centrality such as PageRank, we explore SSL with our kernels and develop highly scalable algorithms for parameter setting, label learning, and sampling. We perform preliminary experiments that demonstrate the properties and potential of our kernels.
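The "asymmetric" baseline kernel the abstract contrasts with, Personalized PageRank, can be sketched as a simple power iteration (pure Python; the graph is illustrative and assumed to have no dangling nodes): p ← alpha·e_seed + (1-alpha)·Pᵀp, so affinity mass concentrates on nodes reachable by short walks from the seed.

```python
def personalized_pagerank(adj, seed, alpha=0.15, iters=200):
    # adj: adjacency matrix (list of lists), every node with out-degree > 0.
    n = len(adj)
    out_deg = [sum(row) for row in adj]
    p = [1.0 if i == seed else 0.0 for i in range(n)]
    for _ in range(iters):
        q = [0.0] * n
        for i in range(n):
            share = (1 - alpha) * p[i] / out_deg[i]
            for j in range(n):
                if adj[i][j]:
                    q[j] += share * adj[i][j]
        q[seed] += alpha   # restart: teleport back to the seed
        p = q
    return p

# Two triangles joined by the edge 2-3; seed in the left triangle.
adj = [[0,1,1,0,0,0], [1,0,1,0,0,0], [1,1,0,1,0,0],
       [0,0,1,0,1,1], [0,0,0,1,0,1], [0,0,0,1,1,0]]
ppr = personalized_pagerank(adj, seed=0)
```

Rows of such per-seed vectors form the dense pairwise-affinity kernel on which the downstream label learner is trained.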

“Semi-Supervised Learning On Graphs Through Reach And Distance Diffusion” Metadata:

  • Title: ➤  Semi-Supervised Learning On Graphs Through Reach And Distance Diffusion
  • Author:

“Semi-Supervised Learning On Graphs Through Reach And Distance Diffusion” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The total file size is 0.93 MB; the files were downloaded 24 times and were made public on Fri Jun 29 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Semi-Supervised Learning On Graphs Through Reach And Distance Diffusion at online marketplaces:


34. Semi-Supervised Learning With Heterophily

By

We derive a family of linear inference algorithms that generalize existing graph-based label propagation algorithms by allowing them to propagate generalized assumptions about "attraction" or "compatibility" between classes of neighboring nodes (in particular, those that involve heterophily between nodes, where "opposites attract"). We call this formulation Semi-Supervised Learning with Heterophily (SSLH) and show how it generalizes and improves upon a recently proposed approach called Linearized Belief Propagation (LinBP). Importantly, our framework allows us to reduce the problem of estimating the relative compatibility between nodes from a partially labeled graph to a simple optimization problem. The result is a very fast algorithm that -- despite its simplicity -- is surprisingly effective: we can classify unlabeled nodes within the same graph in the same time as LinBP but with superior accuracy, even though our algorithm does not know the compatibilities.
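To make the role of a class-compatibility matrix concrete, here is a hedged sketch of linearized belief propagation with an explicit heterophilous compatibility matrix H. It is a simplified variant in the spirit of LinBP/SSLH, not the paper's exact update rule; the graph, priors, and damping factor eps are all illustrative assumptions:

```python
import numpy as np

def propagate_with_compatibility(A, prior, H, iters=30, eps=0.1):
    """Linearized propagation of centered class beliefs.

    A: symmetric adjacency (n x n); prior: (n x k) centered beliefs for
    labeled nodes (zero rows for unlabeled); H: (k x k) compatibility
    matrix -- off-diagonal dominance encodes "opposites attract".
    """
    B = prior.copy()
    for _ in range(iters):
        # Each node mixes its neighbors' beliefs through H, damped by eps.
        B = prior + eps * A @ B @ H
    return B

# 4-cycle 0-1, 0-2, 1-3, 2-3 where linked nodes prefer opposite classes.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
prior = np.zeros((4, 2))
prior[0] = [0.5, -0.5]          # node 0 labeled as class 0
H = np.array([[-1.0, 1.0],      # heterophily: unlike classes attract
              [1.0, -1.0]])
B = propagate_with_compatibility(A, prior, H)
labels = B.argmax(axis=1)        # node 0's neighbors flip to class 1
```

With a homophilous H (identity-like), the same update reduces to ordinary damped label propagation, which is the sense in which the formulation generalizes it.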

“Semi-Supervised Learning With Heterophily” Metadata:

  • Title: ➤  Semi-Supervised Learning With Heterophily
  • Author:

“Semi-Supervised Learning With Heterophily” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The total file size is 1.24 MB; the files were downloaded 18 times and were made public on Sat Jun 30 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Semi-Supervised Learning With Heterophily at online marketplaces:


35. Elastic Net Hypergraph Learning For Image Clustering And Semi-supervised Classification

By

The graph model is emerging as a very effective tool for learning the complex structures and relationships hidden in data. Generally, the critical purpose of graph-oriented learning algorithms is to construct an informative graph for image clustering and classification tasks. In addition to the classical $K$-nearest-neighbor and $r$-neighborhood methods for graph construction, the $l_1$-graph and its variants are emerging methods for finding the neighboring samples of a center datum, where the corresponding ingoing edge weights are simultaneously derived from the sparse reconstruction coefficients of the remaining samples. However, the pair-wise links of the $l_1$-graph are not capable of capturing the high-order relationships between the center datum and its prominent data in sparse reconstruction. Meanwhile, from the perspective of variable selection, the $l_1$-norm sparse constraint, regarded as a LASSO model, tends to select only one datum from a group of highly correlated data and ignore the others. To cope with both drawbacks simultaneously, we propose a new elastic net hypergraph learning model, which consists of two steps. In the first step, a Robust Matrix Elastic Net model is constructed to find the canonically related samples in a somewhat greedy way, achieving the grouping effect by adding the $l_2$ penalty to the $l_1$ constraint. In the second step, a hypergraph is used to represent the high-order relationships between each datum and its prominent samples by regarding them as a hyperedge. Subsequently, the hypergraph Laplacian matrix is constructed for further analysis. New hypergraph learning algorithms, including unsupervised clustering and multi-class semi-supervised classification, are then derived. Extensive experiments on face and handwriting databases demonstrate the effectiveness of the proposed method.
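The grouping effect that motivates adding the $l_2$ penalty can be demonstrated with a toy coordinate-descent elastic net solver (our own sketch, not the paper's Robust Matrix Elastic Net): two nearly identical columns receive similar weights, whereas pure LASSO would tend to pick only one of them:

```python
import numpy as np

def elastic_net_coefs(X, y, lam1=0.1, lam2=0.1, iters=200):
    """Coordinate descent for 0.5||y - Xw||^2 + lam1*||w||_1 + 0.5*lam2*||w||^2."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]   # partial residual excluding column j
            rho = X[:, j] @ r
            # Soft-thresholding (l1) combined with ridge shrinkage (l2).
            w[j] = np.sign(rho) * max(abs(rho) - lam1, 0.0) / (col_sq[j] + lam2)
    return w

rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
x2 = x1 + 1e-3 * rng.normal(size=50)   # nearly identical, highly correlated column
x3 = rng.normal(size=50)               # irrelevant column
X = np.column_stack([x1, x2, x3])
y = x1 + x2
w = elastic_net_coefs(X, y, lam1=0.05, lam2=1.0)
# Grouping effect: w[0] and w[1] end up close; w[2] stays near zero.
```

In the paper's first step, an analogous reconstruction of each center datum from the remaining samples yields the group of related samples that then forms a hyperedge.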

“Elastic Net Hypergraph Learning For Image Clustering And Semi-supervised Classification” Metadata:

  • Title: ➤  Elastic Net Hypergraph Learning For Image Clustering And Semi-supervised Classification
  • Authors:

“Elastic Net Hypergraph Learning For Image Clustering And Semi-supervised Classification” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The total file size is 0.70 MB; the files were downloaded 20 times and were made public on Fri Jun 29 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Elastic Net Hypergraph Learning For Image Clustering And Semi-supervised Classification at online marketplaces:


36. Semi-supervised Learning


“Semi-supervised Learning” Metadata:

  • Title: Semi-supervised Learning
  • Language: English

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The total file size is 1215.49 MB; the files were downloaded 37 times and were made public on Thu Jul 06 2023.

Available formats:
ACS Encrypted PDF - Cloth Cover Detection Log - DjVuTXT - Djvu XML - Dublin Core - Item Tile - JPEG Thumb - JSON - LCP Encrypted EPUB - LCP Encrypted PDF - Log - MARC - MARC Binary - Metadata - OCR Page Index - OCR Search Text - PNG - Page Numbers JSON - RePublisher Final Processing Log - RePublisher Initial Processing Log - Scandata - Single Page Original JP2 Tar - Single Page Processed JP2 ZIP - Text PDF - Title Page Detection Log - chOCR - hOCR -

Related Links:

Online Marketplaces

Find Semi-supervised Learning at online marketplaces:


37. Semi-supervised Learning


“Semi-supervised Learning” Metadata:

  • Title: Semi-supervised Learning
  • Language: English

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The total file size is 1024.43 MB; the files were downloaded 27 times and were made public on Mon Mar 07 2022.

Available formats:
ACS Encrypted PDF - Cloth Cover Detection Log - DjVuTXT - Djvu XML - Dublin Core - Item Tile - JPEG Thumb - JSON - LCP Encrypted EPUB - LCP Encrypted PDF - Log - MARC - MARC Binary - Metadata - OCR Page Index - OCR Search Text - PNG - Page Numbers JSON - Scandata - Single Page Original JP2 Tar - Single Page Processed JP2 ZIP - Text PDF - Title Page Detection Log - chOCR - hOCR -

Related Links:

Online Marketplaces

Find Semi-supervised Learning at online marketplaces:


38. Semi-supervised Learning For Photometric Supernova Classification

By

We present a semi-supervised method for photometric supernova typing. Our approach is to first use the nonlinear dimension reduction technique diffusion map to detect structure in a database of supernova light curves and subsequently employ random forest classification on a spectroscopically confirmed training set to learn a model that can predict the type of each newly observed supernova. We demonstrate that this is an effective method for supernova typing. As supernova numbers increase, our semi-supervised method efficiently utilizes this information to improve classification, a property not enjoyed by template based methods. Applied to supernova data simulated by Kessler et al. (2010b) to mimic those of the Dark Energy Survey, our methods achieve (cross-validated) 95% Type Ia purity and 87% Type Ia efficiency on the spectroscopic sample, but only 50% Type Ia purity and 50% efficiency on the photometric sample due to their spectroscopic follow-up strategy. To improve the performance on the photometric sample, we search for better spectroscopic follow-up procedures by studying the sensitivity of our machine learned supernova classification on the specific strategy used to obtain training sets. With a fixed amount of spectroscopic follow-up time, we find that deeper magnitude-limited spectroscopic surveys are better for producing training sets. For supernova Ia (II-P) typing, we obtain a 44% (1%) increase in purity to 72% (87%) and 30% (162%) increase in efficiency to 65% (84%) of the sample using a 25th (24.5th) magnitude-limited survey instead of the shallower spectroscopic sample used in the original simulations. When redshift information is available, we incorporate it into our analysis using a novel method of altering the diffusion map representation of the supernovae. Incorporating host redshifts leads to a 5% improvement in Type Ia purity and 13% improvement in Type Ia efficiency.
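A minimal sketch of the diffusion-map step, assuming an RBF affinity and omitting the random-forest classification stage, looks like the following (the toy data and bandwidth are our own choices, not the paper's light-curve pipeline):

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2):
    """Nonlinear embedding from eigenvectors of a Markov matrix on the data."""
    # Pairwise squared distances and RBF affinities.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / eps)
    P = W / W.sum(axis=1, keepdims=True)     # row-stochastic transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector (eigenvalue 1).
    return vecs.real[:, order[1:n_components + 1]]

rng = np.random.default_rng(1)
# Two well-separated clusters; the first diffusion coordinate separates them.
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(2, 0.1, (20, 2))])
coords = diffusion_map(X, eps=1.0)
```

In the paper's setting, a classifier (there, a random forest) is then trained on the embedded coordinates of the spectroscopically confirmed supernovae.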

“Semi-supervised Learning For Photometric Supernova Classification” Metadata:

  • Title: ➤  Semi-supervised Learning For Photometric Supernova Classification
  • Authors:
  • Language: English

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The total file size is 15.98 MB; the files were downloaded 99 times and were made public on Sun Sep 22 2013.

Available formats:
Abbyy GZ - Animated GIF - Archive BitTorrent - DjVu - DjVuTXT - Djvu XML - Item Tile - Metadata - Scandata - Single Page Processed JP2 ZIP - Text PDF -

Related Links:

Online Marketplaces

Find Semi-supervised Learning For Photometric Supernova Classification at online marketplaces:


39. Weakly- And Semi-Supervised Learning Of A DCNN For Semantic Image Segmentation

By

Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state of the art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of a few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at https://bitbucket.org/deeplab/deeplab-public.

“Weakly- And Semi-Supervised Learning Of A DCNN For Semantic Image Segmentation” Metadata:

  • Title: ➤  Weakly- And Semi-Supervised Learning Of A DCNN For Semantic Image Segmentation
  • Authors:
  • Language: English

“Weakly- And Semi-Supervised Learning Of A DCNN For Semantic Image Segmentation” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The total file size is 23.30 MB; the files were downloaded 51 times and were made public on Tue Jun 26 2018.

Available formats:
Abbyy GZ - Archive BitTorrent - DjVuTXT - Djvu XML - JPEG Thumb - Metadata - Scandata - Single Page Processed JP2 ZIP - Text PDF -

Related Links:

Online Marketplaces

Find Weakly- And Semi-Supervised Learning Of A DCNN For Semantic Image Segmentation at online marketplaces:


40. Integrating MicroRNA Target Predictions For The Discovery Of Gene Regulatory Networks: A Semi-supervised Ensemble Learning Approach.

By

This article is from BMC Bioinformatics, volume 15. Abstract Background: MicroRNAs (miRNAs) are small non-coding RNAs which play a key role in the post-transcriptional regulation of many genes. Elucidating miRNA-regulated gene networks is crucial for understanding the mechanisms and functions of miRNAs in many biological processes, such as cell proliferation, development, differentiation and cell homeostasis, as well as in many types of human tumors. To this end, we have recently presented the biclustering method HOCCLUS2 for the discovery of miRNA regulatory networks. Experiments on predicted interactions revealed that the statistical and biological consistency of the obtained networks is negatively affected by the poor reliability of the output of miRNA target prediction algorithms. Recently, some learning approaches have been proposed to learn to combine the outputs of distinct prediction algorithms and improve their accuracy. However, the application of classical supervised learning algorithms presents two challenges: i) the presence of only positive examples in datasets of experimentally verified interactions and ii) the unbalanced number of labeled and unlabeled examples. Results: We present a learning algorithm that learns to combine the scores returned by several prediction algorithms, by exploiting information conveyed by validated (only positively labeled) and unlabeled examples of interactions. To face these two related challenges, we resort to a semi-supervised ensemble learning setting. Results obtained using miRTarBase as the set of labeled (positive) interactions and mirDIP as the set of unlabeled interactions show a significant improvement, over competitive approaches, in the quality of the predictions. This solution also improves the effectiveness of HOCCLUS2 in discovering biologically realistic miRNA:mRNA regulatory networks from large-scale prediction data.
Using the miR-17-92 gene cluster family as a reference system and comparing results with previous experiments, we find a large increase in the number of significantly enriched biclusters in pathways, consistent with miR-17-92 functions. Conclusion: The proposed approach proves to be fundamental for the computational discovery of miRNA regulatory networks from large-scale predictions. This paves the way to the systematic application of HOCCLUS2 for a comprehensive reconstruction of all the possible multiple interactions established by miRNAs in regulating the expression of gene networks, which would be otherwise impossible to reconstruct by considering only experimentally validated interactions.

“Integrating MicroRNA Target Predictions For The Discovery Of Gene Regulatory Networks: A Semi-supervised Ensemble Learning Approach.” Metadata:

  • Title: ➤  Integrating MicroRNA Target Predictions For The Discovery Of Gene Regulatory Networks: A Semi-supervised Ensemble Learning Approach.
  • Authors:
  • Language: English

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The total file size is 14.87 MB; the files were downloaded 89 times and were made public on Tue Oct 21 2014.

Available formats:
Abbyy GZ - Animated GIF - Archive BitTorrent - DjVu - DjVuTXT - Djvu XML - Item Tile - JSON - Metadata - Scandata - Single Page Processed JP2 ZIP - Text PDF -

Related Links:

Online Marketplaces

Find Integrating MicroRNA Target Predictions For The Discovery Of Gene Regulatory Networks: A Semi-supervised Ensemble Learning Approach. at online marketplaces:


41. DTIC AD1037515: Semi Supervised Learning Of Feature Hierarchies For Object Detection In A Video (Open Access)

By

We propose a novel approach to boost the performance of generic object detectors on videos by learning video-specific features using a deep neural network. The insight behind our proposed approach is that an object appearing in different frames of a video clip should share similar features, which can be learned to build better detectors. Unlike many supervised detector adaptation or detection-by-tracking methods, our method does not require any extra annotations or temporal correspondence. We start with the high-confidence detections from a generic detector, then iteratively learn new video-specific features and refine the detection scores. In order to learn discriminative and compact features, we propose a new feature learning method using a deep neural network based on autoencoders. It differs from existing unsupervised feature learning methods in two ways: first, it optimizes both discriminative and generative properties of the features simultaneously, which gives our features better discriminative ability; second, our learned features are more compact, while unsupervised feature learning methods usually learn a redundant set of over-complete features. Extensive experimental results on person and horse detection show that significant performance improvement can be achieved with our proposed method.
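The autoencoder idea at the core of the feature learner can be shown in heavily simplified form: a linear autoencoder trained by gradient descent to compress data that lies near a low-dimensional subspace. The data, dimensions, and training schedule are our own assumptions; the paper's model adds the discriminative objective and nonlinearities:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data lying near a 2-D subspace of R^5.
Z = rng.normal(size=(200, 2))
X = Z @ rng.normal(size=(2, 5)) + 0.01 * rng.normal(size=(200, 5))

# Single-hidden-layer linear autoencoder: encode to 2-D, decode back to 5-D.
W1 = 0.1 * rng.normal(size=(5, 2))   # encoder weights
W2 = 0.1 * rng.normal(size=(2, 5))   # decoder weights
lr = 0.01

def recon_loss():
    return ((X @ W1 @ W2 - X) ** 2).mean()

before = recon_loss()
for _ in range(500):
    H = X @ W1                       # hidden codes
    R = H @ W2 - X                   # reconstruction residual
    gW2 = H.T @ R / len(X)           # (scaled) gradient w.r.t. decoder
    gW1 = X.T @ (R @ W2.T) / len(X)  # (scaled) gradient w.r.t. encoder
    W1 -= lr * gW1
    W2 -= lr * gW2
after = recon_loss()
# Reconstruction error drops as the encoder finds the 2-D structure.
```

A discriminative variant would add to this loss a term scoring the hidden codes H against the high-confidence detections, which is the coupling the abstract describes.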

“DTIC AD1037515: Semi Supervised Learning Of Feature Hierarchies For Object Detection In A Video (Open Access)” Metadata:

  • Title: ➤  DTIC AD1037515: Semi Supervised Learning Of Feature Hierarchies For Object Detection In A Video (Open Access)
  • Author: ➤  
  • Language: English

“DTIC AD1037515: Semi Supervised Learning Of Feature Hierarchies For Object Detection In A Video (Open Access)” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The total file size is 17.58 MB; the files were downloaded 52 times and were made public on Wed Apr 01 2020.

Available formats:
Abbyy GZ - Archive BitTorrent - DjVuTXT - Djvu XML - Item Tile - Metadata - OCR Page Index - OCR Search Text - Page Numbers JSON - Scandata - Single Page Processed JP2 ZIP - Text PDF - chOCR - hOCR -

Related Links:

Online Marketplaces

Find DTIC AD1037515: Semi Supervised Learning Of Feature Hierarchies For Object Detection In A Video (Open Access) at online marketplaces:


42. On The Effectiveness Of Laplacian Normalization For Graph Semi-supervised Learning

By


“On The Effectiveness Of Laplacian Normalization For Graph Semi-supervised Learning” Metadata:

  • Title: ➤  On The Effectiveness Of Laplacian Normalization For Graph Semi-supervised Learning
  • Authors:

Edition Identifiers:

Downloads Information:

The book is available for download in "data" format. The total file size is 0.02 MB; the files were downloaded 19 times and were made public on Tue Aug 11 2020.

Available formats:
Archive BitTorrent - BitTorrent - Metadata - Unknown -

Related Links:

Online Marketplaces

Find On The Effectiveness Of Laplacian Normalization For Graph Semi-supervised Learning at online marketplaces:


43. Semi-Supervised Learning Using Greedy Max-Cut

By


“Semi-Supervised Learning Using Greedy Max-Cut” Metadata:

  • Title: ➤  Semi-Supervised Learning Using Greedy Max-Cut
  • Authors:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The total file size is 0.02 MB; the files were downloaded 23 times and were made public on Tue Aug 11 2020.

Available formats:
Archive BitTorrent - BitTorrent - Metadata - Unknown -

Related Links:

Online Marketplaces

Find Semi-Supervised Learning Using Greedy Max-Cut at online marketplaces:


44. Semi-supervised Eigenvectors For Large-scale Locally-biased Learning

By

In many applications, one has side information, e.g., labels that are provided in a semi-supervised manner, about a specific target region of a large data set, and one wants to perform machine learning and data analysis tasks "nearby" that prespecified target region. For example, one might be interested in the clustering structure of a data graph near a prespecified "seed set" of nodes, or one might be interested in finding partitions in an image that are near a prespecified "ground truth" set of pixels. Locally-biased problems of this sort are particularly challenging for popular eigenvector-based machine learning and data analysis tools. At root, the reason is that eigenvectors are inherently global quantities, thus limiting the applicability of eigenvector-based methods in situations where one is interested in very local properties of the data. In this paper, we address this issue by providing a methodology to construct semi-supervised eigenvectors of a graph Laplacian, and we illustrate how these locally-biased eigenvectors can be used to perform locally-biased machine learning. These semi-supervised eigenvectors capture successively-orthogonalized directions of maximum variance, conditioned on being well-correlated with an input seed set of nodes that is assumed to be provided in a semi-supervised manner. We show that these semi-supervised eigenvectors can be computed quickly as the solution to a system of linear equations; and we also describe several variants of our basic method that have improved scaling properties. We provide several empirical examples demonstrating how these semi-supervised eigenvectors can be used to perform locally-biased learning; and we discuss the relationship between our results and recent machine learning algorithms that use global eigenvectors of the graph Laplacian.
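The claim that such locally-biased vectors come from a linear solve can be illustrated with a simplified stand-in: solving a regularized Laplacian system seeded at a target node yields a smooth vector concentrated near the seed. The regularization parameter tau and the path graph are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def seeded_smooth_vector(A, seed, tau=0.5):
    """Locally-biased smooth vector: solve (L + tau*I) x = s.

    The regularized Laplacian system pulls the solution toward the seed
    while keeping it smooth over the graph, mimicking the locally-biased
    behavior of the paper's semi-supervised eigenvectors.
    """
    d = A.sum(axis=1)
    L = np.diag(d) - A                      # combinatorial graph Laplacian
    s = np.zeros(A.shape[0])
    s[seed] = 1.0
    x = np.linalg.solve(L + tau * np.eye(A.shape[0]), s)
    return x / np.linalg.norm(x)

# Path graph 0-1-2-3-4: mass decays with distance from the seed node.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
x = seeded_smooth_vector(A, seed=0)
```

Successive directions orthogonal to this one, each still correlated with the seed set, would play the role of the higher semi-supervised eigenvectors.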

“Semi-supervised Eigenvectors For Large-scale Locally-biased Learning” Metadata:

  • Title: ➤  Semi-supervised Eigenvectors For Large-scale Locally-biased Learning
  • Authors:
  • Language: English

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The total file size is 77.22 MB; the files were downloaded 193 times and were made public on Sat Jul 20 2013.

Available formats:
Abbyy GZ - Animated GIF - Archive BitTorrent - DjVu - DjVuTXT - Djvu XML - JPEG Thumb - Metadata - Scandata - Single Page Processed JP2 ZIP - Text PDF -

Related Links:

Online Marketplaces

Find Semi-supervised Eigenvectors For Large-scale Locally-biased Learning at online marketplaces:


45. Semi-Supervised Nonlinear Distance Metric Learning Via Forests Of Max-Margin Cluster Hierarchies

By

Metric learning is a key problem for many data mining and machine learning applications, and has long been dominated by Mahalanobis methods. Recent advances in nonlinear metric learning have demonstrated the potential power of non-Mahalanobis distance functions, particularly tree-based functions. We propose a novel nonlinear metric learning method that uses an iterative, hierarchical variant of semi-supervised max-margin clustering to construct a forest of cluster hierarchies, where each individual hierarchy can be interpreted as a weak metric over the data. By introducing randomness during hierarchy training and combining the output of many of the resulting semi-random weak hierarchy metrics, we can obtain a powerful and robust nonlinear metric model. This method has two primary contributions: first, it is semi-supervised, incorporating information from both constrained and unconstrained points. Second, we take a relaxed approach to constraint satisfaction, allowing the method to satisfy different subsets of the constraints at different levels of the hierarchy rather than attempting to simultaneously satisfy all of them. This leads to a more robust learning algorithm. We compare our method to a number of state-of-the-art benchmarks on $k$-nearest neighbor classification, large-scale image retrieval and semi-supervised clustering problems, and find that our algorithm yields results comparable or superior to the state-of-the-art, and is significantly more robust to noise.

“Semi-Supervised Nonlinear Distance Metric Learning Via Forests Of Max-Margin Cluster Hierarchies” Metadata:

  • Title: ➤  Semi-Supervised Nonlinear Distance Metric Learning Via Forests Of Max-Margin Cluster Hierarchies
  • Authors:

“Semi-Supervised Nonlinear Distance Metric Learning Via Forests Of Max-Margin Cluster Hierarchies” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The total file size is 0.70 MB; the files were downloaded 21 times and were made public on Sat Jun 30 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Semi-Supervised Nonlinear Distance Metric Learning Via Forests Of Max-Margin Cluster Hierarchies at online marketplaces:


46. Semi-Supervised Learning With Context-Conditional Generative Adversarial Networks

By

We introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss. Images with random patches removed are presented to a generator whose task is to fill in the hole, based on the surrounding pixels. The in-painted images are then presented to a discriminator network that judges if they are real (unaltered training images) or not. This task acts as a regularizer for standard supervised training of the discriminator. Using our approach we are able to directly train large VGG-style networks in a semi-supervised fashion. We evaluate on STL-10 and PASCAL datasets, where our approach obtains performance comparable or superior to existing methods.

“Semi-Supervised Learning With Context-Conditional Generative Adversarial Networks” Metadata:

  • Title: ➤  Semi-Supervised Learning With Context-Conditional Generative Adversarial Networks
  • Authors:

“Semi-Supervised Learning With Context-Conditional Generative Adversarial Networks” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The total file size is 1.38 MB; the files were downloaded 44 times and were made public on Fri Jun 29 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF -

Related Links:

Online Marketplaces

Find Semi-Supervised Learning With Context-Conditional Generative Adversarial Networks at online marketplaces:


47. Graph-based Semi-supervised Learning For Relational Networks

By

We address the problem of semi-supervised learning in relational networks, networks in which nodes are entities and links are the relationships or interactions between them. Typically this problem is conflated with graph-based semi-supervised learning (GSSL), because both problems represent the data as a graph and predict the missing class labels of nodes. However, not all graphs are created equal. In GSSL a graph is constructed, often from independent data, based on similarity. As such, edges tend to connect instances with the same class label. Relational networks, however, can be more heterogeneous, and edges do not always indicate similarity. For instance, instead of links being more likely to connect nodes with the same class label, they may occur more frequently between nodes with different class labels (link-heterogeneity). Or nodes with the same class label do not necessarily have the same type of connectivity across the whole network (class-heterogeneity); e.g., in a network of sexual interactions we may observe links between opposite genders in some parts of the graph and links between the same genders in others. Performing classification in networks with different types of heterogeneity is a hard problem that is made harder still when we do not know a priori the type or level of heterogeneity. Here we present two scalable approaches for graph-based semi-supervised learning for the more general case of relational networks. We demonstrate these approaches on synthetic and real-world networks that display different link patterns within and between classes. Compared to state-of-the-art approaches, ours give better classification performance without prior knowledge of how classes interact. In particular, our two-step label propagation algorithm gives consistently good accuracy and runs on networks of over 1.6 million nodes and 30 million edges in around 12 seconds.
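For reference, the basic one-step label propagation scheme that graph-based SSL methods build on, with its implicit homophily assumption, can be sketched as follows; the paper's two-step variant for handling link-heterogeneity is not reproduced here:

```python
import numpy as np

def label_propagation(A, labels, iters=50):
    """Standard label propagation with clamped labels (homophily assumption).

    labels: length-n array with class ids for labeled nodes, -1 for unlabeled.
    """
    n = A.shape[0]
    k = labels.max() + 1
    F = np.zeros((n, k))
    mask = labels >= 0
    F[mask, labels[mask]] = 1.0
    deg = A.sum(axis=1, keepdims=True)
    P = np.divide(A, deg, out=np.zeros_like(A), where=deg > 0)
    for _ in range(iters):
        F = P @ F                            # average neighbors' beliefs
        F[mask] = 0.0
        F[mask, labels[mask]] = 1.0          # clamp the known labels
    return F.argmax(axis=1)

# Two triangles joined by one bridge edge; one labeled node per triangle.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
pred = label_propagation(A, np.array([0, -1, -1, -1, -1, 1]))
# Each triangle adopts the label of its labeled member.
```

On a heterophilous network, this averaging step is exactly what fails, which is the failure mode the abstract's two-step approach is designed to avoid.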

“Graph-based Semi-supervised Learning For Relational Networks” Metadata:

  • Title: ➤  Graph-based Semi-supervised Learning For Relational Networks
  • Author:

“Graph-based Semi-supervised Learning For Relational Networks” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The file size is 0.88 MB; the files were downloaded 25 times and went public on Fri Jun 29 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF

Related Links:

Online Marketplaces

Find Graph-based Semi-supervised Learning For Relational Networks at online marketplaces:


48Semi-Supervised Representation Learning Based On Probabilistic Labeling

By

In this paper, we present a new algorithm for semi-supervised representation learning. In this algorithm, we first find a vector representation for the labels of the data points based on their local positions in the space. Then, we map the data to a lower-dimensional space using a linear transformation such that the dependency between the transformed data and the assigned labels is maximized. In fact, we try to find a mapping that is as discriminative as possible. The approach uses the Hilbert-Schmidt Independence Criterion (HSIC) as the dependence measure. We also present a kernelized version of the algorithm, which allows non-linear transformations and provides more flexibility in finding the appropriate mapping. Using unlabeled data to learn a new representation is not always beneficial, and no algorithm can deterministically guarantee a performance improvement from exploiting unlabeled data. Therefore, we also propose a bound on the performance of the algorithm, which can be used to determine the effectiveness of using the unlabeled data in the algorithm. We demonstrate the ability of the algorithm to find the transformation using both toy examples and real-world datasets.
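The dependence measure named in the abstract, HSIC, has a simple empirical form: center two kernel matrices and take a normalized trace of their product. The sketch below computes the biased empirical HSIC estimator with linear kernels on synthetic data; the data, kernels, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def hsic(K, L):
    """Biased empirical HSIC estimate between kernel matrices K and L:
    trace(K H L H) / (n - 1)^2, where H is the centering matrix."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def linear_kernel(X):
    return X @ X.T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Y_dep = X @ rng.normal(size=(3, 2))       # linearly dependent on X
Y_ind = rng.normal(size=(100, 2))         # independent of X

h_dep = hsic(linear_kernel(X), linear_kernel(Y_dep))
h_ind = hsic(linear_kernel(X), linear_kernel(Y_ind))
# dependent data yields a markedly larger HSIC value than independent data
```

Maximizing this quantity over a linear map of the data, as the abstract describes, amounts to choosing the projection whose kernel matrix is most dependent on the label kernel.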

“Semi-Supervised Representation Learning Based On Probabilistic Labeling” Metadata:

  • Title: ➤  Semi-Supervised Representation Learning Based On Probabilistic Labeling
  • Authors:

“Semi-Supervised Representation Learning Based On Probabilistic Labeling” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The file size is 0.50 MB; the files were downloaded 17 times and went public on Fri Jun 29 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF

Related Links:

Online Marketplaces

Find Semi-Supervised Representation Learning Based On Probabilistic Labeling at online marketplaces:


49Morpho-syntactic Lexicon Generation Using Graph-based Semi-supervised Learning

By

Morpho-syntactic lexicons provide information about the morphological and syntactic roles of words in a language. Such lexicons are not available for all languages, and even when available, their coverage can be limited. We present a graph-based semi-supervised learning method that uses the morphological, syntactic and semantic relations between words to automatically construct wide-coverage lexicons from small seed sets. Our method is language-independent, and we show that we can expand a 1000-word seed lexicon to more than 100 times its size with high quality for 11 languages. In addition, the automatically created lexicons provide features that improve performance in two downstream tasks: morphological tagging and dependency parsing.
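To make the seed-expansion idea concrete, here is a hypothetical toy version: morpho-syntactic attribute sets spread from seed words to their neighbours over a word-relation graph, one hop per iteration. The word graph, attribute names, and `expand_lexicon` helper are all invented for illustration; the paper's actual method propagates attributes with a learned graph-propagation objective over several relation types.

```python
from collections import defaultdict

def expand_lexicon(edges, seed, n_iter=3):
    """Spread attribute sets from seed words over an undirected
    word-relation graph (toy sketch, not the paper's algorithm)."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    lex = {w: set(attrs) for w, attrs in seed.items()}
    for _ in range(n_iter):
        new = {}
        for w, nbrs in graph.items():
            # union of the attributes already assigned to neighbours
            attrs = set().union(*(lex.get(v, set()) for v in nbrs))
            if w not in lex and attrs:
                new[w] = attrs
        lex.update(new)
    return lex

# Morphological relations link inflected forms; two verbs are seeds.
edges = [("walk", "walks"), ("walks", "walking"), ("run", "runs")]
seed = {"walk": {"POS:VERB"}, "run": {"POS:VERB"}}
lex = expand_lexicon(edges, seed)
# "walks", "walking" and "runs" inherit POS:VERB from the seeds
```

Each iteration grows the lexicon by one hop, which is how a small seed set can reach a graph-connected vocabulary many times its size.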

“Morpho-syntactic Lexicon Generation Using Graph-based Semi-supervised Learning” Metadata:

  • Title: ➤  Morpho-syntactic Lexicon Generation Using Graph-based Semi-supervised Learning
  • Authors:

“Morpho-syntactic Lexicon Generation Using Graph-based Semi-supervised Learning” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The file size is 0.48 MB; the files were downloaded 21 times and went public on Thu Jun 28 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF

Related Links:

Online Marketplaces

Find Morpho-syntactic Lexicon Generation Using Graph-based Semi-supervised Learning at online marketplaces:


50Low-rank Label Propagation For Semi-supervised Learning With 100 Millions Samples

By

The success of semi-supervised learning crucially relies on the scalability to a huge amount of unlabelled data that are needed to capture the underlying manifold structure for better classification. Since computing the pairwise similarity between the training data is prohibitively expensive for most kinds of input data, currently, there is no general ready-to-use semi-supervised learning method/tool available for learning with tens of millions or more data points. In this paper, we adopted the idea of two low-rank label propagation algorithms, GLNP (Global Linear Neighborhood Propagation) and Kernel Nyström Approximation, and implemented the parallelized version of the two algorithms accelerated with Nesterov's accelerated projected gradient descent for Big-data Label Propagation (BigLP). The parallel algorithms are tested on five real datasets ranging from 7000 to 10,000,000 in size and a simulation dataset of 100,000,000 samples. In the experiments, the implementation can scale up to datasets with 100,000,000 samples and hundreds of features, and the algorithms also significantly improved the prediction accuracy when only a very small percentage of the data is labeled. The results demonstrate that the BigLP implementation is highly scalable to big data and effective in utilizing the unlabeled data for semi-supervised learning.
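The Nyström approximation mentioned in the abstract is what makes the pairwise-similarity matrix tractable: instead of the full n-by-n kernel, one samples m landmark points and reconstructs K ≈ C W⁺ Cᵀ, where C holds kernel values against the landmarks and W is the landmark-by-landmark kernel. The sketch below is a minimal standalone version with an RBF kernel; the function names, data, and parameter choices are assumptions for illustration, not the BigLP implementation.

```python
import numpy as np

def nystrom(X, m, gamma=1.0, seed=0):
    """Rank-m Nyström approximation of the RBF kernel matrix of X:
    K ~= C @ pinv(W) @ C.T, with m randomly chosen landmark points."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)
    Z = X[idx]                                # landmark points

    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    C = rbf(X, Z)                             # (n, m) cross-kernel
    W = rbf(Z, Z)                             # (m, m) landmark kernel
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
K_approx = nystrom(X, m=50)

# Compare against the exact kernel to gauge the approximation quality.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_full = np.exp(-d2)
err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
```

The payoff is in the costs: the approximation needs O(nm) kernel evaluations and O(m³) for the pseudo-inverse rather than O(n²) storage, which is what lets label propagation reach the 100-million-sample scale the paper reports.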

“Low-rank Label Propagation For Semi-supervised Learning With 100 Millions Samples” Metadata:

  • Title: ➤  Low-rank Label Propagation For Semi-supervised Learning With 100 Millions Samples
  • Authors:

“Low-rank Label Propagation For Semi-supervised Learning With 100 Millions Samples” Subjects and Themes:

Edition Identifiers:

Downloads Information:

The book is available for download in "texts" format. The file size is 0.54 MB; the files were downloaded 18 times and went public on Sat Jun 30 2018.

Available formats:
Archive BitTorrent - Metadata - Text PDF

Related Links:

Online Marketplaces

Find Low-rank Label Propagation For Semi-supervised Learning With 100 Millions Samples at online marketplaces:


Source: The Open Library

The Open Library Search Results

Available books for downloads and borrow from The Open Library

1Semi-supervised learning

By


“Semi-supervised learning” Metadata:

  • Title: Semi-supervised learning
  • Authors:
  • Language: English
  • Number of Pages: Median: 508
  • Publisher: MIT Press
  • Publish Date:
  • Publish Location: Cambridge, Mass

“Semi-supervised learning” Subjects and Themes:

Edition Identifiers:

Access and General Info:

  • First Year Published: 2010
  • Is Full Text Available: Yes
  • Is The Book Public: No
  • Access Status: Borrowable

Online Access

Downloads Are Not Available:

The book is not public, so the download links will not provide the entire book; however, borrowing the book online is available.

Online Borrowing:

Online Marketplaces

Find Semi-supervised learning at online marketplaces:


Buy “Semi Supervised Learning” online:

Shop for “Semi Supervised Learning” on popular online marketplaces.