Search resource list
knn_java
- A k-nearest-neighbor (kNN) algorithm written in Java; kNN is one of the basic algorithms of data mining.
dataset
- A Canny edge-detection algorithm I wrote in Python for finding edges in images. The requirements are described in the accompanying PPT, and the code can be adjusted as needed.
DataMiningApriori
- A Java implementation of Apriori. The code is somewhat long and cannot output the association rules themselves, but it has been tested and works.
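Since the entry above notes that its Java version cannot output association rules, here is a minimal sketch of both halves of Apriori, written in Python for consistency with the other entries in this list. All names are illustrative, and `min_support` is an absolute transaction count rather than a fraction:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (as frozensets) mapped to their support counts."""
    transactions = [set(t) for t in transactions]
    # Start from all 1-itemsets, then grow level by level.
    current = {frozenset([i]) for t in transactions for i in t}
    freq = {}
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        current_freq = {c: n for c, n in counts.items() if n >= min_support}
        freq.update(current_freq)
        # Join step: build (k+1)-item candidates from frequent k-itemsets.
        keys = list(current_freq)
        current = {a | b for a, b in combinations(keys, 2)
                   if len(a | b) == len(a) + 1}
    return freq

def rules(freq, min_conf):
    """Derive association rules X -> Y with confidence >= min_conf."""
    out = []
    for itemset, count in freq.items():
        for r in range(1, len(itemset)):
            for lhs in map(frozenset, combinations(itemset, r)):
                # Every subset of a frequent itemset is frequent, so freq[lhs] exists.
                conf = count / freq[lhs]
                if conf >= min_conf:
                    out.append((lhs, itemset - lhs, conf))
    return out
```

A fuller implementation would also prune candidates whose k-item subsets are infrequent before counting; this sketch relies only on the join step.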
FCM-(2)
- A fuzzy c-means (FCM) clustering algorithm I wrote myself in C#; it has been tested and runs.
CF
- The main program of a collaborative filtering algorithm written in MATLAB. The program is simple and easy to understand, and can be applied to recommender systems.
kNN
- A kNN algorithm written in Python, including simple routines for generating a dataset, a simple classifier, and text conversion.
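The classifier part of a package like the one above can be stated in a few lines: rank the training points by distance to the query and take a majority vote among the k nearest. A minimal sketch with illustrative names (requires Python 3.8+ for `math.dist`):

```python
import math
from collections import Counter

def knn_classify(point, data, labels, k=3):
    """Classify `point` by majority vote among its k nearest neighbors."""
    nearest = sorted(
        range(len(data)),
        key=lambda i: math.dist(point, data[i]),  # Euclidean distance
    )[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]
```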
Bayes
- A Bayes classifier written in Python; working through this program gives a broad grasp of the principles of Bayesian classification.
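The principle the entry above refers to can be illustrated with a multinomial naive Bayes classifier over tokenized documents, scoring each class by its log prior plus per-word log likelihoods with Laplace smoothing. This is a generic sketch, not the package's actual code:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Count class frequencies and per-class word frequencies."""
    class_counts = Counter(labels)
    word_counts = defaultdict(Counter)
    vocab = set()
    for doc, y in zip(docs, labels):
        word_counts[y].update(doc)
        vocab.update(doc)
    return class_counts, word_counts, vocab

def predict_nb(doc, model):
    """Return the class with the highest posterior log-probability."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for y, n in class_counts.items():
        lp = math.log(n / total)  # log prior
        denom = sum(word_counts[y].values()) + len(vocab)
        for w in doc:
            lp += math.log((word_counts[y][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = y, lp
    return best
```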
DecisionTree
- A decision-tree algorithm written in Python. The example demonstrates simple decision-tree processing and illustrates the basic ideas of the decision-tree algorithm.
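The basic idea mentioned above is usually ID3: at each node, split on the categorical feature with the largest information gain, recursing until the labels are pure. A minimal sketch under that assumption, with rows as dicts and all names illustrative:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def build_tree(rows, labels, features):
    """Recursively build an ID3 decision tree over categorical features."""
    if len(set(labels)) == 1:
        return labels[0]                       # pure leaf
    if not features:
        return Counter(labels).most_common(1)[0][0]  # majority leaf

    def gain(f):  # information gain of splitting on feature f
        rem = 0.0
        for v in {r[f] for r in rows}:
            sub = [l for r, l in zip(rows, labels) if r[f] == v]
            rem += len(sub) / len(labels) * entropy(sub)
        return entropy(labels) - rem

    best = max(features, key=gain)
    branches = {}
    for v in {r[best] for r in rows}:
        idx = [i for i, r in enumerate(rows) if r[best] == v]
        branches[v] = build_tree([rows[i] for i in idx],
                                 [labels[i] for i in idx],
                                 [f for f in features if f != best])
    return (best, branches)

def classify(tree, row):
    while isinstance(tree, tuple):
        feature, branches = tree
        tree = branches[row[feature]]
    return tree
```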
selenium_sina_text
- A crawler written in Python that scrapes the WAP version of Sina Weibo, collecting the posts users publish along with timestamps, posting client, comment counts, repost counts, and other metrics. Ready to use.
emailSpam.tar
- Simple spam-detection code written in Python, with test cases included.
Maltab
- MATLAB source code for various classic data-mining algorithms, especially suitable for people who understand the principles but cannot write the code themselves and want to do data modeling.
find
- Reads the required document data from Excel, writes it back to the corresponding positions, and builds file paths by string concatenation from the names.
Crawler.tar
- A crawler written in Python 3.5 that scrapes reviews of the film 《声之形》 (A Silent Voice) from Douban, counts the word frequencies in the reviews, and generates a word cloud.
