Search resource list
step3-java-version-test.png.tar
- Step 3: install Hadoop
wordcount
- A WordCount program for Hadoop, based on MapReduce
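The map/reduce logic behind a WordCount program like this one can be sketched in plain Java, without a cluster. This is an illustrative in-memory version (class and method names are my own, not from the package): the "map" phase emits a `(word, 1)` pair per token, and the "reduce" phase sums the pairs per word.

```java
import java.util.*;

// Minimal in-memory sketch of MapReduce WordCount logic (illustrative names).
public class WordCountSketch {

    // "Map" phase: emit (word, 1) for every token in every input line.
    static List<Map.Entry<String, Integer>> map(List<String> lines) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String line : lines) {
            for (String token : line.toLowerCase().split("\\W+")) {
                if (!token.isEmpty()) {
                    pairs.add(Map.entry(token, 1));
                }
            }
        }
        return pairs;
    }

    // "Reduce" phase: sum the emitted counts for each distinct word.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            counts.merge(p.getKey(), p.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> input = List.of("hello hadoop", "hello mapreduce");
        System.out.println(reduce(map(input))); // {hadoop=1, hello=2, mapreduce=1}
    }
}
```

In the real Hadoop job, the framework shuffles the `(word, 1)` pairs so that all pairs for one word reach the same reducer; here the single `reduce` call plays that role.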
difeyeframework_V1.1.4
- Difeye is a lightweight PHP development framework with the following features: database connections are automatically configured for master/slave read-write splitting, suitable for both single-host and distributed site deployment; Smarty templating is supported, with flexible configuration of third-party cache components; pages and actions are fully separated, with a Page_Load entry function executed automatically on page load, C#-style; MySQL, MongoDB, and other third-party database modules are supported, with read-write splitting and distributed deployment; session/cookie storage is configurable. Difeye agile lightweight PHP framework update: back-end administration added.
mahout-distribution-0.7-src
- The latest and most complete image clustering algorithms for Hadoop/Java to date: the Mahout source distribution.
HadoopFSOperations
- 1. Overview. 2. File operations: 2.1 upload a local file to the Hadoop FS; 2.2 create a new file on the Hadoop FS and write to it; 2.3 delete a file on the Hadoop FS; 2.4 read a file. 3. Directory operations: 3.1 create a directory on the Hadoop FS; 3.2 delete a directory; 3.3 list all files under a directory. 4. References and code download.
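The operations listed above map directly onto Hadoop's `FileSystem` API. The following is a sketch of each one, assuming `hadoop-client` is on the classpath and `fs.defaultFS` points at a running HDFS (the paths are illustrative); it is not runnable standalone.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.IOUtils;

// Sketch of the HDFS operations in the table of contents above.
// Requires hadoop-client on the classpath and a reachable HDFS cluster.
public class HadoopFSOperationsSketch {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // 2.1 Upload a local file to the Hadoop FS.
        fs.copyFromLocalFile(new Path("/tmp/local.txt"), new Path("/user/demo/remote.txt"));

        // 2.2 Create a new file on the Hadoop FS and write to it.
        try (FSDataOutputStream out = fs.create(new Path("/user/demo/new.txt"))) {
            out.writeBytes("hello hdfs\n");
        }

        // 2.4 Read a file (copy its bytes to stdout).
        try (FSDataInputStream in = fs.open(new Path("/user/demo/new.txt"))) {
            IOUtils.copyBytes(in, System.out, 4096, false);
        }

        // 2.3 Delete a file (recursive = false).
        fs.delete(new Path("/user/demo/remote.txt"), false);

        // 3.1 Create a directory; 3.3 list all files under it.
        fs.mkdirs(new Path("/user/demo/dir"));
        for (FileStatus status : fs.listStatus(new Path("/user/demo/dir"))) {
            System.out.println(status.getPath());
        }

        // 3.2 Delete a directory (recursive = true).
        fs.delete(new Path("/user/demo/dir"), true);

        fs.close();
    }
}
```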
HadoopInActionSourcecode
- The example source code from the book Hadoop in Action; quite helpful for beginners.
rpc
- An implementation of Hadoop's RPC mechanism in Python.
MapReduce
- An inverted index implemented with Hadoop MapReduce.
HadoopIcansee
- Hadoop: The Definitive Guide (original edition); a very good book for advancing your Hadoop skills, well worth studying.
Test_1
- A sample computation program and Hadoop testing tool; a beginner tutorial, for reference only.
src_delta_simrank.tar
- Source code for delta-SimRank, Hadoop version.
InvertedIndex
- Inverted index programming based on Hadoop, for Linux.
InvertedIndex
- An inverted index implemented in Hadoop; the simplest line-based inverted index, able to show the paths of the documents in which each word appears.
MapReduce-WordCount
- Source code for a word-count program under Hadoop.
1-s2.0-S1005888510601409-main
- A study paper on Hadoop.
Untitled-Document1
- A research paper on Hadoop cloud computing.
EasyHadoop
- An installation guide for EasyHadoop, detailing how to build a Hadoop cluster with it.
kmeans
- The k-means algorithm on Hadoop: distributed K-means.
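One iteration of the distributed k-means scheme can be sketched in plain Java: the "map" step assigns each point to its nearest centroid, and the "reduce" step averages the points assigned to each centroid to produce updated centroids. This is an illustrative single-machine version, not the package's code; the Hadoop version partitions the points across mappers and sums partial totals in reducers.

```java
import java.util.*;

// One assign-and-average iteration of k-means (illustrative sketch).
public class KMeansStep {

    // Index of the centroid nearest to `point` (squared Euclidean distance).
    static int nearest(double[] point, double[][] centroids) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int c = 0; c < centroids.length; c++) {
            double d = 0;
            for (int i = 0; i < point.length; i++) {
                double diff = point[i] - centroids[c][i];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = c; }
        }
        return best;
    }

    // Returns updated centroids after one iteration.
    static double[][] iterate(double[][] points, double[][] centroids) {
        double[][] sums = new double[centroids.length][centroids[0].length];
        int[] counts = new int[centroids.length];
        for (double[] p : points) {                 // "map": assign each point
            int c = nearest(p, centroids);
            counts[c]++;
            for (int i = 0; i < p.length; i++) sums[c][i] += p[i];
        }
        for (int c = 0; c < centroids.length; c++) { // "reduce": average clusters
            if (counts[c] > 0) {
                for (int i = 0; i < sums[c].length; i++) sums[c][i] /= counts[c];
            } else {
                sums[c] = centroids[c];              // empty cluster keeps its centroid
            }
        }
        return sums;
    }

    public static void main(String[] args) {
        double[][] points = {{0, 0}, {1, 0}, {10, 10}, {11, 10}};
        double[][] updated = iterate(points, new double[][]{{0, 0}, {10, 10}});
        System.out.println(Arrays.deepToString(updated)); // [[0.5, 0.0], [10.5, 10.0]]
    }
}
```

The full algorithm repeats `iterate` until the centroids stop moving; in the Hadoop version each repetition is one MapReduce job, with the current centroids broadcast to all mappers.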
cookbook
- A MapReduce implementation of recipe recommendation; it must be packaged as a jar and executed in a Hadoop cluster environment.
WordCount2
- A Hadoop MapReduce word-count program.