# [Deep Learning Series] Data Preprocessing in PaddlePaddle

The [previous article](http://www.cnblogs.com/charlotte77/p/7759802.html) covered the basics of convolutional neural networks. This one was originally meant to go deeper into CNNs and hand-write one, but many readers have emailed or messaged me asking how PaddlePaddle reads data and how to do data preprocessing with it. Most tutorials online stick to a few standard examples whose datasets never have to be prepared by hand, so the question never comes up; on a real project, though, preprocessing can feel completely opaque. So this article pulls it together: how to do data preprocessing with PaddlePaddle. (Also published on 博客园: [【深度学习系列】PaddlePaddle之数据预处理 - Charlotte77 - 博客园](http://www.cnblogs.com/charlotte77/p/7802226.html))

---

## PaddlePaddle's Basic Data Formats

According to the [official documentation](http://doc.paddlepaddle.org/doc_cn/api/v2/data.html), PaddlePaddle supports several data formats: four data types and three sequence formats.

**Four data types:**

- dense_vector: a dense vector of floats.
- sparse_binary_vector: a sparse binary vector; most values are 0, and every position that does hold a value must be 1.
- sparse_float_vector: a sparse vector; most values are 0, and the positions that hold values can be any float.
- integer: integer format.

The APIs are as follows:

- `paddle.v2.data_type.dense_vector(dim, seq_type=0)`
  - Description: dense vector; the input feature is a dense float vector. For example, the input image for handwritten-digit recognition is 28×28 pixels, so the network input should be a 784-dimensional dense vector.
  - Parameters:
    - dim (int): vector dimension
    - seq_type (int): sequence format of the input
  - Return type: InputType
- `paddle.v2.data_type.sparse_binary_vector(dim, seq_type=0)`
  - Description: sparse binary vector; the input feature is a sparse vector in which every element is either 0 or 1.
  - Parameters: as above
  - Return type: as above
- `paddle.v2.data_type.sparse_vector(dim, seq_type=0)`
  - Description: sparse vector; most elements are 0, and the rest can be any float.
  - Parameters: as above
  - Return type: as above
- `paddle.v2.data_type.integer_value(value_range, seq_type=0)`
  - Description: integer format.
  - Parameters:
    - value_range (int): range of each element
    - seq_type (int): sequence format of the input
  - Return type: InputType

**Three sequence formats:**

- SequenceType.NO_SEQUENCE: not a sequence.
- SequenceType.SEQUENCE: a time sequence.
- SequenceType.SUB_SEQUENCE: a time sequence in which every element is itself a time sequence.

The APIs are as follows:

- `paddle.v2.data_type.dense_vector_sequence(dim, seq_type=0)`
  - Description: a sequence of dense vectors.
  - Parameters: dim (int): dimension of the dense vector
  - Return type: InputType
- `paddle.v2.data_type.sparse_binary_vector_sequence(dim, seq_type=0)`
  - Description: a sequence of sparse binary vectors; every element in each sequence is either 0 or 1.
  - Parameters: dim (int): dimension of the sparse vector
  - Return type: InputType
- `paddle.v2.data_type.sparse_non_value_slot(dim, seq_type=0)`
  - Description: a sequence of sparse binary vectors; every element in each sequence is either 0 or 1.
  - Parameters:
    - dim (int): dimension of the sparse vector
    - seq_type (int): sequence format of the input
  - Return type: InputType
- `paddle.v2.data_type.sparse_value_slot(dim, seq_type=0)`
  - Description: a sequence of sparse vectors; most elements are 0, and the rest can be any float.
  - Parameters:
    - dim (int): dimension of the sparse vector
    - seq_type (int): sequence format of the input
  - Return type: InputType
- `paddle.v2.data_type.integer_value_sequence(value_range, seq_type=0)`
  - Description: integer sequence; value_range (int) is the range of each element.

Different data types and sequence modes return different formats, as the table below shows:

*(Figure: table of return formats for each data type under the NO_SEQUENCE, SEQUENCE and SUB_SEQUENCE modes; image not reproduced here.)* In the table, f denotes a float and i denotes an integer.
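To make the types above concrete, here is where they typically appear in practice: each input layer of a network is declared with one of them via `paddle.layer.data`. This is only an illustrative sketch, not code from this article; it assumes the usual `import paddle.v2 as paddle` convention and an MNIST-style task (784 pixels, 10 classes), and the layer names are arbitrary.

```python
import paddle.v2 as paddle

paddle.init(use_gpu=False, trainer_count=1)

# a flattened 28x28 image as a 784-dimensional dense float vector
images = paddle.layer.data(
    name='pixel', type=paddle.data_type.dense_vector(784))

# an integer class label in [0, 10)
label = paddle.layer.data(
    name='label', type=paddle.data_type.integer_value(10))
```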
Note: for sparse_binary_vector and sparse_float_vector, PaddlePaddle stores the indices of the positions that hold values. For example:

- For a 5-dimensional non-sequence sparse binary vector `[0, 1, 1, 0, 0]` of type sparse_binary_vector, the stored value is `[1, 2]` (only positions 1 and 2 hold values).
- For a 5-dimensional non-sequence sparse float vector `[0, 0.5, 0.7, 0, 0]` of type sparse_float_vector, the stored value is `[(1, 0.5), (2, 0.7)]` (only positions 1 and 2 hold values, namely 0.5 and 0.7).

---

## How PaddlePaddle Reads Data

Now that we know the four basic data types and three sequence modes, we can pick whatever fits our own data. But once the data has been preprocessed, how do we get it into the model for training? There are generally two basic approaches:

- Load everything into memory: the model reads training data straight from memory, with no heavy IO overhead, so it is fast; suitable for small datasets.
- Load from disk / HDFS / shared storage: this avoids occupying memory and is the usual choice for large datasets, but every load is another round of IO, which hurts speed.

In PaddlePaddle there are three ways to read data: reader, reader creator, and reader decorator. What is the difference?

- reader: reads data from local files, the network, a distributed file system such as HDFS, and so on (it can also generate random data), and returns one or more data items.
- reader creator: a function that returns a reader.
- reader decorator: a decorator that composes one or more readers.

**reader**

Let's start with a plain reader and build one for the housing-price data (the dataset used as the example in the first lecture of Andrew Ng's Stanford open course):

1. Create a reader; in essence it is an iterator that returns one sample at a time (here, one housing-price sample):

```python
reader = paddle.dataset.uci_housing.train()
```

2. Create a shuffle_reader: pass in the reader from the previous step and set buf_size; it reads buf_size samples at a time and shuffles them automatically, so the data is randomized:

```python
shuffle_reader = paddle.reader.shuffle(reader, buf_size=100)
```

3. Create a batch_reader: pass in the shuffled reader from the previous step together with a batch_size:

```python
batch_reader = paddle.batch(shuffle_reader, batch_size=2)
```

The three steps can also be combined into a single expression:

```python
reader = paddle.batch(
    paddle.reader.shuffle(
        uci_housing.train(),
        buf_size=100),
    batch_size=2)
```

This can be pictured as follows:

*(Figure: raw dataset → reader → shuffle_reader → batch_reader → trainer.)*

As the figure shows, we pull samples straight from the raw dataset with a reader, feed them one by one into the shuffle_reader, which shuffles them locally, and then hand the shuffled data to the trainer batch by batch for each training iteration. The flow is simple, and the whole pipeline takes only a single line of code.
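Before wiring the composed reader into a trainer, it can be worth a quick sanity check of what it actually yields. A minimal sketch, assuming the snippet above was run with the usual `import paddle.v2 as paddle` and `from paddle.v2.dataset import uci_housing` imports:

```python
for batch_id, batch in enumerate(reader()):
    # each batch is a list of batch_size samples; for uci_housing,
    # a sample is a (13-dimensional feature vector, house price) pair
    print(batch_id, len(batch), batch[0])
    break  # one batch is enough for a sanity check
```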
**reader creator**

To generate simple random data, a reader creator looks like this:

```python
def reader_creator():
    def reader():
        while True:
            yield numpy.random.uniform(-1, 1, size=784)
    return reader
```

See the source in [creator.py](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/reader/creator.py); four formats are supported: np_array, text_file, RecordIO and cloud_reader.
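The same pattern carries over to your own files: the reader creator closes over whatever arguments describe the data source and returns a reader that yields one sample at a time. The sketch below is hypothetical (the file name, comma-separated layout and label position are made up for illustration), but it follows the same shape as the built-in dataset readers:

```python
import numpy

def csv_reader_creator(file_path):
    def reader():
        with open(file_path) as f:
            for line in f:
                fields = line.strip().split(',')
                # all columns except the last are features, the last is the label
                features = numpy.array(fields[:-1], dtype='float32')
                label = int(fields[-1])
                yield features, label
    return reader

# wrapped exactly like the built-in datasets:
# data = paddle.batch(
#     paddle.reader.shuffle(csv_reader_creator('my_data.csv'), buf_size=1000),
#     batch_size=32)
```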
**reader decorator**

If you want to read two sets of data at the same time, you can define two readers and shuffle their combination. Say I want to read both the records of which car models users compared and the records of which car models they browsed: I define two readers, contrast() and view(), then use the predefined reader decorators to buffer and compose these data, and finally shuffle the combined result. See the source in [decorator.py](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/reader/decorator.py).

```python
data = paddle.reader.shuffle(
    paddle.reader.compose(
        paddle.reader(contrast(contrast_path), buf_size=100),
        paddle.reader(view(view_path), buf_size=200)),
    500)
```
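As far as I understand the semantics of `paddle.reader.compose`, the outputs of the component readers are combined item by item into one flat sample, so the two car-data readers above would each contribute their fields to every training sample. A toy sketch with made-up readers, assuming `import paddle.v2 as paddle`:

```python
def features_a():        # a reader that yields a tuple per sample
    for i in range(3):
        yield i, i * 10

def features_b():        # a reader that yields a single value per sample
    for i in range(3):
        yield i * 100

combined = paddle.reader.compose(features_a, features_b)
for sample in combined():
    # each sample combines the corresponding outputs of features_a and features_b
    print(sample)
```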
Composing readers this way has one big advantage: combining features for training becomes much easier. The traditional way to run a model is to fix the label and the features, gather as many promising features as possible, and throw them all into the model, which means building one wide table. After training, we analyze which features matter, add or remove some, and train again, which means editing the original label-feature table. With a small dataset that is merely annoying; with a large one, every newly added feature means another join against the primary key and the label, and those joins are expensive. With readers we can instead put each group of related features into its own table and keep its storage location in a variable. Each time we run the model, we create one reader per feature group we want, combine them with a reader decorator, shuffle, and feed the result into the model. Isn't that convenient?
If that is still abstract, here is a concrete example. Suppose we want to predict whether a user will buy a car. The label is "buys" or "does not buy", and there are 20 features such as which car models the user browsed, which models they compared, and their preferences among the models they follow. The traditional approach is to build one table like this:

*(Figure: a single wide table with the label column followed by feature_1 through feature_20.)*

If we want to drop feature_2 to see how much it affects the model's accuracy, we have to remove that column from the table; to try a new feature, we have to add a column. With a reader decorator, we can organize the dataset like this instead:

*(Figure: the same data split into four smaller tables, each holding one group of related features.)*

Features of the same kind go into the same table, so there is no need for frequent, time-consuming joins. We build four tables in total and create four readers:

```python
data = paddle.reader.shuffle(
    paddle.reader.compose(
        paddle.reader(table1(table1_path), buf_size=100),
        paddle.reader(table2(table2_path), buf_size=100),
        paddle.reader(table3(table3_path), buf_size=100),
        paddle.reader(table4(table4_path), buf_size=100)),
    500)
```
If we later discover a new feature and want to see whether it improves accuracy, we can extract that feature's data on its own, add one more reader, compose it in with the reader decorator, shuffle, and feed the result into the model. That is all it takes.

---

## A Data Preprocessing Example in PaddlePaddle

Take handwritten digits again as the example. Processing the data and splitting it into train and test takes only four steps.

1. Specify the data locations:

```python
import paddle.v2.dataset.common
import subprocess
import numpy
import platform

__all__ = ['train', 'test', 'convert']

URL_PREFIX = 'http://yann.lecun.com/exdb/mnist/'
TEST_IMAGE_URL = URL_PREFIX + 't10k-images-idx3-ubyte.gz'
TEST_IMAGE_MD5 = '9fb629cd022fa330f9573f3'
TEST_LABEL_URL = URL_PREFIX + 't10k-labels-idx1-ubyte.gz'
TEST_LABEL_MD5 = 'ec29112dd5afab7f02629c'
TRAIN_IMAGE_URL = URL_PREFIX + 'train-images-idx3-ubyte.gz'
TRAIN_IMAGE_MD5 = 'f68b3c2dcbeaaa9fbdd348bbdeb94873'
TRAIN_LABEL_URL = URL_PREFIX + 'train-labels-idx1-ubyte.gz'
TRAIN_LABEL_MD5 = 'd53e105ee54ea40749a09fcbcd1e9432'
```
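As a side note, `paddle.v2.dataset.common.download` returns the local path of the fetched file (cached, as far as I can tell, under `~/.cache/paddle/dataset` by default), so it is easy to check that the files arrived. A quick sketch reusing the constants above:

```python
import os

local_path = paddle.v2.dataset.common.download(
    TRAIN_IMAGE_URL, 'mnist', TRAIN_IMAGE_MD5)
print(local_path, os.path.getsize(local_path))
```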
2. Create the reader creator:

```python
def reader_creator(image_filename, label_filename, buffer_size):
    # create a reader
    def reader():
        # decompress the .gz files with the platform's zcat equivalent
        if platform.system() == 'Darwin':
            zcat_cmd = 'gzcat'
        elif platform.system() == 'Linux':
            zcat_cmd = 'zcat'
        else:
            raise NotImplementedError()
        m = subprocess.Popen([zcat_cmd, image_filename], stdout=subprocess.PIPE)
        m.stdout.read(16)  # skip the 16-byte header of the idx3 image file
        l = subprocess.Popen([zcat_cmd, label_filename], stdout=subprocess.PIPE)
        l.stdout.read(8)   # skip the 8-byte header of the idx1 label file
        try:
            # reader could be break.
            while True:
                labels = numpy.fromfile(
                    l.stdout, 'ubyte', count=buffer_size).astype("int")
                if labels.size != buffer_size:
                    break
                # numpy.fromfile returns empty slice after EOF.
                images = numpy.fromfile(
                    m.stdout, 'ubyte', count=buffer_size * 28 * 28).reshape(
                        (buffer_size, 28 * 28)).astype('float32')
                # normalize pixel values from [0, 255] to [-1, 1]
                images = images / 255.0 * 2.0 - 1.0
                for i in xrange(buffer_size):
                    yield images[i, :], int(labels[i])
        finally:
            m.terminate()
            l.terminate()
    return reader
```
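A quick way to sanity-check this reader creator is to point it at the downloaded `.gz` files and pull a single sample. A sketch, reusing the download helper and constants from step 1:

```python
images_path = paddle.v2.dataset.common.download(
    TRAIN_IMAGE_URL, 'mnist', TRAIN_IMAGE_MD5)
labels_path = paddle.v2.dataset.common.download(
    TRAIN_LABEL_URL, 'mnist', TRAIN_LABEL_MD5)

sample_reader = reader_creator(images_path, labels_path, buffer_size=100)
image, label = next(sample_reader())
# expect a (784,) float vector roughly in [-1, 1] and an integer label in 0-9
print(image.shape, image.min(), image.max(), label)
```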
3. Create the training and test sets:

```python
def train():
    """
    Create the reader creator for the MNIST training set.

    Each sample in the reader is an image, given as pixel values
    normalized to [-1, 1], together with a label in 0-9.

    Returns: training reader creator
    """
    return reader_creator(
        paddle.v2.dataset.common.download(TRAIN_IMAGE_URL, 'mnist',
                                          TRAIN_IMAGE_MD5),
        paddle.v2.dataset.common.download(TRAIN_LABEL_URL, 'mnist',
                                          TRAIN_LABEL_MD5), 100)


def test():
    """
    Create the reader creator for the MNIST test set.

    Each sample in the reader is an image, given as pixel values
    normalized to [-1, 1], together with a label in 0-9.

    Returns: test reader creator
    """
    return reader_creator(
        paddle.v2.dataset.common.download(TEST_IMAGE_URL, 'mnist',
                                          TEST_IMAGE_MD5),
        paddle.v2.dataset.common.download(TEST_LABEL_URL, 'mnist',
                                          TEST_LABEL_MD5), 100)
```
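With `train()` and `test()` defined, they plug into the same shuffle/batch pipeline shown earlier, and the resulting batched readers are what you would eventually hand to the trainer. A sketch (the `buf_size` and `batch_size` values are arbitrary):

```python
train_batches = paddle.batch(
    paddle.reader.shuffle(train(), buf_size=8192),
    batch_size=128)

test_batches = paddle.batch(test(), batch_size=128)

# e.g. trainer.train(reader=train_batches, ...) in the v2 API
```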
4. Download the data and convert it to the required format:

```python
def fetch():
    paddle.v2.dataset.common.download(TRAIN_IMAGE_URL, 'mnist', TRAIN_IMAGE_MD5)
    paddle.v2.dataset.common.download(TRAIN_LABEL_URL, 'mnist', TRAIN_LABEL_MD5)
    paddle.v2.dataset.common.download(TEST_IMAGE_URL, 'mnist', TEST_IMAGE_MD5)
    paddle.v2.dataset.common.download(TEST_LABEL_URL, 'mnist', TEST_LABEL_MD5)


def convert(path):
    """
    Convert the dataset to RecordIO format.
    """
    paddle.v2.dataset.common.convert(path, train(), 1000, "minist_train")
    paddle.v2.dataset.common.convert(path, test(), 1000, "minist_test")
```
If you want to switch to your own training data, just follow the same steps with your own data locations and create the corresponding reader creator (or reader decorator).

That was an image example. What if we want to train a text model, say for sentiment analysis; how do we handle the data then? The steps are just as simple.

Suppose we have a pile of data in which every line is one sample, the columns are separated by `\t`, the first column is the class label, and the second column is the input text, with the words in the text separated by spaces. Here are two example lines:

> positive    今天终于试了自己理想的车 外观太骚气了 而且中控也很棒
> negative    这台车好贵 而且还费油 性价比太低了

Now let's preprocess the data.

1. Create the reader:

```python
def train_reader(data_dir, word_dict, label_dict):
    def reader():
        UNK_ID = word_dict["<UNK>"]
        word_col = 1
        lbl_col = 0
        for file_name in os.listdir(data_dir):
            with open(os.path.join(data_dir, file_name), "r") as f:
                for line in f:
                    line_split = line.strip().split("\t")
                    word_ids = [
                        word_dict.get(w, UNK_ID)
                        for w in line_split[word_col].split()
                    ]
                    yield word_ids, label_dict[line_split[lbl_col]]
    return reader
```
The return types are `paddle.data_type.integer_value_sequence` (the index of each word in the dictionary) and `paddle.data_type.integer_value` (the class label).

2. Compose the reading pipeline:

```python
train_reader = paddle.batch(
    paddle.reader.shuffle(
        reader.train_reader(train_data_dir, word_dict, lbl_dict),
        buf_size=1000),
    batch_size=batch_size)
```
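The snippet above assumes `word_dict` and `lbl_dict` already exist. They are plain Python dicts, so if you do not have them yet, something along these lines is enough to build them from the same tab-separated files (a hypothetical helper; the `<UNK>` entry matches the convention used by the reader):

```python
import collections
import os

def build_dicts(data_dir):
    word_freq = collections.Counter()
    labels = set()
    for file_name in os.listdir(data_dir):
        with open(os.path.join(data_dir, file_name), "r") as f:
            for line in f:
                label, text = line.strip().split("\t")
                labels.add(label)
                word_freq.update(text.split())
    # most frequent word gets id 0, and so on; reserve one id for <UNK>
    word_dict = {w: i for i, (w, _) in enumerate(word_freq.most_common())}
    word_dict["<UNK>"] = len(word_dict)
    label_dict = {l: i for i, l in enumerate(sorted(labels))}
    return word_dict, label_dict

# word_dict, lbl_dict = build_dicts(train_data_dir)
```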
The complete code (with the train/test split added) is as follows:

```python
import os


def train_reader(data_dir, word_dict, label_dict):
    """
    Create the training data reader.

    :param data_dir: directory of the data.
    :type data_dir: str
    :param word_dict: the word dictionary; it must contain "<UNK>".
    :type word_dict: python dict
    :param label_dict: the label dictionary.
    :type label_dict: python dict
    """

    def reader():
        UNK_ID = word_dict["<UNK>"]
        word_col = 1
        lbl_col = 0
        for file_name in os.listdir(data_dir):
            with open(os.path.join(data_dir, file_name), "r") as f:
                for line in f:
                    line_split = line.strip().split("\t")
                    word_ids = [
                        word_dict.get(w, UNK_ID)
                        for w in line_split[word_col].split()
                    ]
                    yield word_ids, label_dict[line_split[lbl_col]]

    return reader


def test_reader(data_dir, word_dict):
    """
    Create the test data reader.

    :param data_dir: directory of the data.
    :type data_dir: str
    :param word_dict: the word dictionary; it must contain "<UNK>".
    :type word_dict: python dict
    """

    def reader():
        UNK_ID = word_dict["<UNK>"]
        word_col = 1
        for file_name in os.listdir(data_dir):
            with open(os.path.join(data_dir, file_name), "r") as f:
                for line in f:
                    line_split = line.strip().split("\t")
                    if len(line_split) < word_col:
                        continue
                    word_ids = [
                        word_dict.get(w, UNK_ID)
                        for w in line_split[word_col].split()
                    ]
                    yield word_ids, line_split[word_col]

    return reader
```
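For completeness, the data layers that consume this reader's output would be declared with the sequence types from the first section: `integer_value_sequence` for the word ids and `integer_value` for the label. A sketch, assuming `import paddle.v2 as paddle` and the `word_dict` / `lbl_dict` built earlier:

```python
dict_dim = len(word_dict)
class_dim = len(lbl_dict)

text = paddle.layer.data(
    name='text', type=paddle.data_type.integer_value_sequence(dict_dim))
label = paddle.layer.data(
    name='label', type=paddle.data_type.integer_value(class_dim))
```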
---

**Summary**

This article covered how to load your own dataset in PaddlePaddle, convert it into the appropriate format, and split it into train and test sets. When picking up a framework we usually start by running a few simple demos, but only when we can take our own data, rather than the canned demo datasets, through a complete working model on a real project can we claim to have the framework's basics down. The first step of running any model is data preprocessing, and the way PaddlePaddle provides is very simple yet has several advantages:

- Shuffling data is very convenient.
- Data can be grouped into batches for training.
- A reader decorator can compose multiple readers, which makes running models on combined features much more efficient.
- Data can be read with multiple threads.

I previously used MXNet to train a license-plate recognition model, and training on 500,000 images in one go was very slow. There are two remedies: one is batched training, which most frameworks support; the other is converting the images to MXNet's own .rec format via im2rec.py to speed up reading, which is rather tedious. TensorFlow likewise has its own TFRecord format. Each approach has its pros and cons, but in terms of ease of use PaddlePaddle is the simplest.

This article does not follow on directly from the previous one: several emails asked how to load one's own data for training, so I decided to slot this piece in first. In the next article we will continue with more advanced CNN topics. See you next week ^_^!

References:

1. Official documentation: http://doc.paddlepaddle.org/develop/doc_cn/getstarted/concepts/use_concepts_cn.html