How to build an RNN in TensorFlow? Here's a minimal tutorial (3)

2017-04-30

from __future__ import print_function, division
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

num_epochs = 100
total_series_length = 50000
truncated_backprop_length = 15
state_size = 4
num_classes = 2
echo_step = 3
batch_size = 5
num_batches = total_series_length//batch_size//truncated_backprop_length

def generateData():
    x = np.array(np.random.choice(2, total_series_length, p=[0.5, 0.5]))
    y = np.roll(x, echo_step)
    y[0:echo_step] = 0

    x = x.reshape((batch_size, -1))  # The first index changing slowest, subseries as rows
    y = y.reshape((batch_size, -1))

    return (x, y)

batchX_placeholder = tf.placeholder(tf.float32, [batch_size, truncated_backprop_length])
batchY_placeholder = tf.placeholder(tf.int32, [batch_size, truncated_backprop_length])

init_state = tf.placeholder(tf.float32, [batch_size, state_size])

W = tf.Variable(np.random.rand(state_size+1, state_size), dtype=tf.float32)
b = tf.Variable(np.zeros((1, state_size)), dtype=tf.float32)

W2 = tf.Variable(np.random.rand(state_size, num_classes), dtype=tf.float32)
b2 = tf.Variable(np.zeros((1, num_classes)), dtype=tf.float32)

# Unpack columns
inputs_series = tf.unpack(batchX_placeholder, axis=1)
labels_series = tf.unpack(batchY_placeholder, axis=1)

# Forward pass
current_state = init_state
states_series = []
for current_input in inputs_series:
    current_input = tf.reshape(current_input, [batch_size, 1])
    input_and_state_concatenated = tf.concat(1, [current_input, current_state])  # Increasing number of columns

    next_state = tf.tanh(tf.matmul(input_and_state_concatenated, W) + b)  # Broadcasted addition
    states_series.append(next_state)
    current_state = next_state

logits_series = [tf.matmul(state, W2) + b2 for state in states_series]  # Broadcasted addition
predictions_series = [tf.nn.softmax(logits) for logits in logits_series]

losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels)
          for logits, labels in zip(logits_series, labels_series)]
total_loss = tf.reduce_mean(losses)

train_step = tf.train.AdagradOptimizer(0.3).minimize(total_loss)

def plot(loss_list, predictions_series, batchX, batchY):
    plt.subplot(2, 3, 1)
    plt.cla()
    plt.plot(loss_list)

    for batch_series_idx in range(5):
        one_hot_output_series = np.array(predictions_series)[:, batch_series_idx, :]
        single_output_series = np.array([(1 if out[0] < 0.5 else 0) for out in one_hot_output_series])

        plt.subplot(2, 3, batch_series_idx + 2)
        plt.cla()
        plt.axis([0, truncated_backprop_length, 0, 2])
        left_offset = range(truncated_backprop_length)
        plt.bar(left_offset, batchX[batch_series_idx, :], width=1, color="blue")
        plt.bar(left_offset, batchY[batch_series_idx, :] * 0.5, width=1, color="red")
        plt.bar(left_offset, single_output_series * 0.3, width=1, color="green")

    plt.draw()
    plt.pause(0.0001)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    plt.ion()
    plt.figure()
    plt.show()
    loss_list = []

    for epoch_idx in range(num_epochs):
        x, y = generateData()
        _current_state = np.zeros((batch_size, state_size))

        print("New data, epoch", epoch_idx)

        for batch_idx in range(num_batches):
            start_idx = batch_idx * truncated_backprop_length
            end_idx = start_idx + truncated_backprop_length

            batchX = x[:, start_idx:end_idx]
            batchY = y[:, start_idx:end_idx]

            _total_loss, _train_step, _current_state, _predictions_series = sess.run(
                [total_loss, train_step, current_state, predictions_series],
                feed_dict={
                    batchX_placeholder: batchX,
                    batchY_placeholder: batchY,
                    init_state: _current_state
                })

            loss_list.append(_total_loss)

            if batch_idx % 100 == 0:
                print("Step", batch_idx, "Loss", _total_loss)
                plot(loss_list, _predictions_series, batchX, batchY)

plt.ioff()
plt.show()
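Note that the listing above uses an early (pre-1.0) TensorFlow API. Several of these calls were renamed or changed signature in TensorFlow 1.0: tf.unpack became tf.unstack, tf.concat now takes the list of values before the axis, tf.nn.sparse_softmax_cross_entropy_with_logits requires keyword arguments, and tf.initialize_all_variables was replaced by tf.global_variables_initializer. As a rough sketch (not part of the original tutorial), the graph-construction portion under TensorFlow 1.x could look like this:

import numpy as np
import tensorflow as tf

# Same hyperparameters as the tutorial script above
batch_size, truncated_backprop_length, state_size, num_classes = 5, 15, 4, 2

batchX_placeholder = tf.placeholder(tf.float32, [batch_size, truncated_backprop_length])
batchY_placeholder = tf.placeholder(tf.int32, [batch_size, truncated_backprop_length])
init_state = tf.placeholder(tf.float32, [batch_size, state_size])

W = tf.Variable(np.random.rand(state_size + 1, state_size), dtype=tf.float32)
b = tf.Variable(np.zeros((1, state_size)), dtype=tf.float32)
W2 = tf.Variable(np.random.rand(state_size, num_classes), dtype=tf.float32)
b2 = tf.Variable(np.zeros((1, num_classes)), dtype=tf.float32)

# tf.unpack was renamed tf.unstack in TensorFlow 1.0
inputs_series = tf.unstack(batchX_placeholder, axis=1)
labels_series = tf.unstack(batchY_placeholder, axis=1)

# Forward pass: tf.concat now takes the values first, then the axis
current_state = init_state
states_series = []
for current_input in inputs_series:
    current_input = tf.reshape(current_input, [batch_size, 1])
    input_and_state = tf.concat([current_input, current_state], 1)
    next_state = tf.tanh(tf.matmul(input_and_state, W) + b)
    states_series.append(next_state)
    current_state = next_state

logits_series = [tf.matmul(state, W2) + b2 for state in states_series]

# The cross-entropy op now requires keyword arguments
losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
          for logits, labels in zip(logits_series, labels_series)]
total_loss = tf.reduce_mean(losses)
train_step = tf.train.AdagradOptimizer(0.3).minimize(total_loss)

with tf.Session() as sess:
    # tf.initialize_all_variables was replaced by tf.global_variables_initializer
    sess.run(tf.global_variables_initializer())

Under TensorFlow 2.x, either version would additionally need the tf.compat.v1 namespace with eager execution disabled, since the script builds a static graph around placeholders and Session.run.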
We're Hiring

  We are recruiting for editor, reporter, operations, and other positions, based in Zhongguancun, Beijing. We look forward to having you join us and experience the rise of artificial intelligence together.

  For details, reply with the word "招聘" (hiring) in the official account's chat window.

  One More Thing…

  What else in the AI world is worth your attention today? Reply "今天" (today) in the QbitAI (量子位) official account chat window to see the AI industry and research news we've gathered from across the web.

  Tracking the most compelling content in artificial intelligence
