Search resource list
gradient_descent
- Python implementation of the gradient descent algorithm for training a logistic regression model.
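As a minimal sketch of the technique this entry names (illustrative only, not the listed resource's code), batch gradient descent on the logistic-regression log-loss looks like this:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, epochs=1000):
    """Fit weights (bias included) by batch gradient descent on the
    mean log-loss of a logistic regression model."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = sigmoid(Xb @ w)              # predicted probabilities
        grad = Xb.T @ (p - y) / len(y)   # gradient of the mean log-loss
        w -= lr * grad
    return w

def predict(w, X):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return (sigmoid(Xb @ w) >= 0.5).astype(int)

# Toy separable data: label 1 exactly when x > 0
X = np.array([[-2.0], [-1.0], [-0.5], [0.5], [1.0], [2.0]])
y = np.array([0, 0, 0, 1, 1, 1])
w = train_logistic_regression(X, y)
print(predict(w, X))  # expected: [0 0 0 1 1 1]
```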
SimulatedAnnealing
- Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic for approximating global optimization in a large search space, and it is often used when the search space is discrete.
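A compact sketch of the idea (an illustration under simple assumptions, not this package's implementation): accept worse moves with probability exp(-delta/T) so the search can climb out of local minima, and cool T geometrically.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=5000, seed=0):
    """Minimize f by simulated annealing: uphill moves are accepted
    with probability exp(-delta / T), and T decays geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # random neighbor
        fc = f(cand)
        delta = fc - fx
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                           # geometric cooling schedule
    return best, fbest

# Multimodal test function with its global minimum at x = 0
f = lambda x: x * x + 3 * abs(math.sin(3 * x))
x_best, f_best = simulated_annealing(f, x0=4.0)
print(x_best, f_best)
```

Because the best point found so far is tracked separately, the returned value never exceeds f(x0) even if the walk wanders uphill at the end.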
LRM---GDM-[Matlab-1.0]
- A very simple demo of a vanilla linear regression model (LRM) optimized via the vanilla gradient descent method (GDM) on 1-D and 2-D data sets. Note that a multicollinearity case is unexpectedly exhibited; a more advanced version is planned.
CV_Image_segmentation
- The CV (Chan-Vese) model is an important image segmentation model. To address its slow convergence and low efficiency, this code proposes a new method for solving it: the model's energy functional is rewritten as a total-variation formulation that has a stable solution, the dual formulation is then used to minimize it, and a speed term is introduced to accelerate convergence. The new method overcomes the drawbacks of the gradient descent approach, which requires a small time step and many iterations.
gradent_desent
- Gradient descent algorithm; can be unzipped and run directly on a computer, and is guaranteed to be correct.
nmfsc
- A sparse-constrained NMF algorithm based on gradient descent (NMFsc). Similar code has appeared on pudn before, but it was missing one program; everything is included here.
bit-for-ldpc
- A novel class of bit-flipping (BF) algorithms for decoding low-density parity-check (LDPC) codes is presented. The proposed algorithms, called gradient descent bit flipping (GDBF) algorithms, can be regarded as simplified gradient descent.
BPNN
- A Java program that uses a BP neural network trained by gradient descent to classify the iris data set; suitable for beginners.
mlclass-ex5
- Machine-learning material based on Matlab simulation, implementing a linear regression classifier. Files included in this exercise: ex1.m, an Octave script that steps you through the exercise; ex1_multi.m, an Octave script for the later parts; ex1data1.txt, a dataset for linear regression with one variable.
mlclass-ex1
- Linear perceptron, machine-learning related, based on Matlab simulation. Files included in this exercise: ex1.m, an Octave script that steps you through the exercise; ex1_multi.m, an Octave script for the later parts; ex1data1.txt, a dataset for linear regression with one variable.
ClosedFormVsGradientDescent
- Closed-form linear regression vs. gradient descent in machine learning.
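The comparison this entry names can be sketched in a few lines (an illustration, not the listed resource's code): both the normal equations and batch gradient descent minimize the same least-squares objective, so on well-conditioned data they recover the same weights.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.01, size=100)  # true w = [2, 3]
Xb = np.hstack([np.ones((100, 1)), X])   # design matrix with bias column

# Closed form (normal equations): solve (X^T X) w = X^T y
w_closed = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)

# Batch gradient descent on the same least-squares objective
w_gd = np.zeros(2)
lr = 0.5
for _ in range(2000):
    grad = Xb.T @ (Xb @ w_gd - y) / len(y)  # gradient of mean squared error
    w_gd -= lr * grad

print(w_closed, w_gd)  # both should approach [2, 3]
```

The closed form is exact in one linear solve but costs O(d^3) in the feature dimension; gradient descent trades that for cheap iterations, which is why it scales better to high-dimensional problems.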
FNN
- Fuzzy neural network trained with a gradient descent algorithm; it can approximate any continuous function to arbitrary precision.
GradientDescent
- Gradient descent method; stochastic gradient descent will be added later (work in progress).
1
- Gradient descent learning code; simple and clear, understandable at a glance.
EMKmeans
- The online KMeans algorithm differs from ordinary clustering: its main update rule is gradient descent.
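A small sketch of that connection (illustrative, with a farthest-point initialization added for determinism; not this resource's code): each incoming point nudges its nearest centroid toward it, which is exactly a stochastic-gradient step on the k-means distortion objective.

```python
import numpy as np

def online_kmeans(points, k, lr=0.1, epochs=20, seed=0):
    """Sequential (online) k-means: each point moves its nearest centroid
    toward it by a fixed learning rate -- a stochastic-gradient-descent
    step on the k-means distortion. Centroids are seeded by greedy
    farthest-point initialization."""
    rng = np.random.default_rng(seed)
    centroids = [points[0].astype(float)]
    for _ in range(k - 1):  # farthest-point initialization
        d = np.min([((points - c) ** 2).sum(axis=1) for c in centroids],
                   axis=0)
        centroids.append(points[np.argmax(d)].astype(float))
    centroids = np.stack(centroids)
    for _ in range(epochs):
        for x in points[rng.permutation(len(points))]:
            j = np.argmin(((centroids - x) ** 2).sum(axis=1))  # nearest
            centroids[j] += lr * (x - centroids[j])            # SGD step
    return centroids

# Two well-separated blobs around (0, 0) and (5, 5)
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
c = online_kmeans(pts, k=2)
print(np.sort(c[:, 0]))  # roughly 0 and 5
```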
nonlinear-optimization
- A collection of nonlinear programming algorithms, including line search, the gradient descent method, Newton's method, the conjugate gradient method, the DFP algorithm, the BFGS algorithm, and trust-region algorithms.
Perceptron
- The purpose of this exercise is to learn and master two perceptron algorithms: the batch perceptron algorithm and the batch relaxation-with-margin algorithm. The perceptron algorithm builds a linear classifier by learning from two classes of labeled samples. Learning amounts to solving for the perceptron weight vector: by defining a criterion function J(a), the problem is reduced to minimizing the scalar function J(a), i.e. J(a) is minimal when a is a solution vector, and this minimization is commonly carried out by gradient descent. The exercise presents both perceptron algorithms based on gradient descent, explains the theory, implements them in code, and finally compares and analyzes the characteristics of the two algorithms.
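The batch perceptron described above can be sketched as follows (a minimal illustration of the criterion-function approach, not the exercise's own code): with sign-normalized samples, J(a) is the sum of -a^T y over misclassified samples, and its negative gradient is simply the sum of those samples.

```python
import numpy as np

def batch_perceptron(X, labels, lr=0.1, max_iter=100):
    """Batch perceptron via gradient descent on the criterion
    J(a) = sum over misclassified y of (-a^T y). Samples of the
    second class are sign-normalized so that a correct solution
    satisfies a^T y > 0 for every sample y."""
    Y = np.hstack([np.ones((len(X), 1)), X])  # augment with bias term
    Y[labels == -1] *= -1                     # sign normalization
    a = np.zeros(Y.shape[1])
    for _ in range(max_iter):
        mis = Y[Y @ a <= 0]                   # currently misclassified
        if len(mis) == 0:
            break                             # linearly separated: done
        a += lr * mis.sum(axis=0)             # gradient step on J(a)
    return a

# Linearly separable 1-D data: class +1 for x > 0, class -1 for x < 0
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
labels = np.array([-1, -1, 1, 1])
a = batch_perceptron(X, labels)
pred = np.sign(np.hstack([np.ones((len(X), 1)), X]) @ a)
print(pred)  # expected: [-1. -1.  1.  1.]
```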
rnn-from-scratch-master
- Application and concepts of RNNs, with RNN source code and usage instructions. The parameters (W, U, V) are shared across time steps, and the output at each time step can be a softmax, so cross-entropy loss can be used as the error function together with some optimizing method (e.g., gradient descent).
gradientDescentMulti.m
- Gradient descent for multivariable data
Gradient_descent
- Matlab implementation that solves symmetric positive definite systems of linear equations by the steepest descent (gradient descent) method.
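The method this entry names translates directly to a few lines of NumPy (a sketch of the standard algorithm, not the listed Matlab code): for SPD A, solving A x = b is equivalent to minimizing f(x) = 0.5 x^T A x - b^T x, whose gradient is A x - b, and an exact line search along the residual gives the step length in closed form.

```python
import numpy as np

def steepest_descent(A, b, tol=1e-10, max_iter=10000):
    """Solve A x = b for symmetric positive definite A by steepest
    descent on f(x) = 0.5 x^T A x - b^T x. The residual r = b - A x
    is the negative gradient, and the exact line search along r gives
    alpha = (r.r) / (r.Ar)."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        r = b - A @ x                        # residual = -gradient
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ (A @ r))      # exact step length
        x += alpha * r
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])       # SPD matrix
b = np.array([1.0, 2.0])
x = steepest_descent(A, b)
print(x)  # should match np.linalg.solve(A, b)
```

Convergence is linear with a rate governed by the condition number of A, which is why conjugate gradient is usually preferred for large SPD systems.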