A First Look at TensorFlow 2.0: Features and Highlights

TF 2.0 uses dynamic graphs by default, i.e. eager mode. This means TensorFlow can, like PyTorch, print intermediate values without running anything inside a session. Dynamic and static graphs do differ, though, so TF 2.0 also changes how the code is written. One gripe: TF 2.0 still starts up much more slowly than PyTorch.
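As a quick illustration (a minimal sketch, independent of the code later in this article): in eager mode an operation returns its concrete value immediately, with no Session.run() required.

```python
import tensorflow as tf

# Eager execution is on by default in TF 2.0: operations run immediately
# and return concrete values instead of symbolic graph nodes.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.matmul(a, a)

print(b)          # prints the result tensor directly, no Session needed
print(b.numpy())  # .numpy() exposes the underlying NumPy array
```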



Operations are recorded on a tape (GradientTape)

This is a key change. From TF 0.x through TF 1.x, operations were added to a Graph. Now, operations are recorded by a gradient tape; all we have to do is make the forward pass and the loss computation happen inside the tape's context manager.

```python
with tf.GradientTape() as tape:
    logits = mnist_model(images, training=True)
    # loss_value must be computed inside the tape context
    loss_value = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
    loss_value = tf.reduce_mean(loss_value)

grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
```


Note that tape.gradient here computes the derivatives of the loss with respect to the model parameters. In earlier versions we either used the optimizer's minimize() or tf.gradients() to compute derivatives. In eager mode, tf.gradients cannot be used.
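To make the pattern concrete, here is a minimal, self-contained sketch (not part of the original example) of computing a derivative with tape.gradient in eager mode:

```python
import tensorflow as tf

x = tf.Variable(3.0)

with tf.GradientTape() as tape:
    # The forward computation must happen inside the tape context,
    # otherwise the tape has nothing recorded to differentiate.
    y = x * x + 2.0 * x

dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())  # dy/dx = 2*x + 2 = 8.0 at x = 3.0
```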

The full MNIST CNN example below shows a complete TF 2.0 training loop built on GradientTape:

```python
# coding: utf-8
# pytorch: loss.backward(), optimizer.step() perform gradient computation and parameter updates;
# tf2.0 does it with: grads = tape.gradient(), optimizer.apply_gradients()!
# reference: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/tensorflow_v2/notebooks/3_NeuralNetworks/convolutional_network.ipynb

from __future__ import absolute_import, division, print_function

import tensorflow as tf
from tensorflow.keras import Model, layers
import numpy as np

# MNIST dataset parameters.
num_classes = 10  # total classes (0-9 digits).

# Training parameters.
learning_rate = 0.001
training_steps = 200
batch_size = 128
display_step = 10

# Network parameters.
conv1_filters = 32  # number of filters for 1st conv layer.
conv2_filters = 64  # number of filters for 2nd conv layer.
fc1_units = 1024    # number of neurons for 1st fully-connected layer.

# Prepare MNIST data.
from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Convert to float32.
x_train, x_test = np.array(x_train, np.float32), np.array(x_test, np.float32)
# Normalize images value from [0, 255] to [0, 1].
x_train, x_test = x_train / 255., x_test / 255.

# Use tf.data API to shuffle and batch data.
train_data = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_data = train_data.repeat().shuffle(5000).batch(batch_size).prefetch(1)


# Create TF Model.
class ConvNet(Model):
    # Set layers.
    def __init__(self):
        super(ConvNet, self).__init__()
        # Convolution Layer with 32 filters and a kernel size of 5.
        self.conv1 = layers.Conv2D(32, kernel_size=5, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with kernel size of 2 and strides of 2.
        self.maxpool1 = layers.MaxPool2D(2, strides=2)
        # Convolution Layer with 64 filters and a kernel size of 3.
        self.conv2 = layers.Conv2D(64, kernel_size=3, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with kernel size of 2 and strides of 2.
        self.maxpool2 = layers.MaxPool2D(2, strides=2)
        # Flatten the data to a 1-D vector for the fully connected layer.
        self.flatten = layers.Flatten()
        # Fully connected layer.
        self.fc1 = layers.Dense(1024)
        # Apply Dropout (if is_training is False, dropout is not applied).
        self.dropout = layers.Dropout(rate=0.5)
        # Output layer, class prediction.
        self.out = layers.Dense(num_classes)

    # Set forward pass.
    def call(self, x, is_training=False):
        x = tf.reshape(x, [-1, 28, 28, 1])
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.flatten(x)
        x = self.fc1(x)
        x = self.dropout(x, training=is_training)
        x = self.out(x)
        if not is_training:
            # tf cross entropy expects logits without softmax, so only
            # apply softmax when not training.
            x = tf.nn.softmax(x)
        return x


# Build neural network model.
conv_net = ConvNet()


# Cross-Entropy Loss.
# Note that this will apply 'softmax' to the logits.
def cross_entropy_loss(x, y):
    # Convert labels to int64 for tf cross-entropy function.
    y = tf.cast(y, tf.int64)
    # Apply softmax to logits and compute cross-entropy.
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=x)
    # Average loss across the batch.
    return tf.reduce_mean(loss)


# Accuracy metric.
def accuracy(y_pred, y_true):
    # Predicted class is the index of the highest score in the prediction vector (i.e. argmax).
    correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.cast(y_true, tf.int64))
    return tf.reduce_mean(tf.cast(correct_prediction, tf.float32), axis=-1)


# Stochastic gradient descent optimizer.
optimizer = tf.optimizers.Adam(learning_rate)


# Optimization process.
def run_optimization(x, y):
    # Wrap computation inside a GradientTape for automatic differentiation.
    with tf.GradientTape() as g:
        # Forward pass.
        pred = conv_net(x, is_training=True)
        # Compute loss.
        loss = cross_entropy_loss(pred, y)

    # Variables to update, i.e. trainable variables.
    trainable_variables = conv_net.trainable_variables
    # Compute gradients.
    gradients = g.gradient(loss, trainable_variables)
    # Update W and b following gradients.
    optimizer.apply_gradients(zip(gradients, trainable_variables))


# Run training for the given number of steps.
for step, (batch_x, batch_y) in enumerate(train_data.take(training_steps), 1):
    # Run the optimization to update W and b values.
    run_optimization(batch_x, batch_y)

    if step % display_step == 0:
        pred = conv_net(batch_x)
        loss = cross_entropy_loss(pred, batch_y)
        acc = accuracy(pred, batch_y)
        print("step: %i, loss: %f, accuracy: %f" % (step, loss, acc))

# Test model on validation set.
pred = conv_net(x_test)
print("Test Accuracy: %f" % accuracy(pred, y_test))
```



Notes:

- TF 2.0 uses eager execution (dynamic graphs) by default; there is no Session anymore.

- Pay attention to the use of `for step, (batch_x, batch_y) in enumerate(train_data.take(training_steps), 1):` in the code above.

- In PyCharm, note that you cannot step into `from tensorflow.keras import Model, layers` to inspect the internal implementation. Write the network structure in an object-oriented style, implementing `__init__`, `build`, `call`, etc. (see the sketch below).
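As a minimal sketch of that object-oriented style (a toy two-layer classifier invented here for illustration, not part of the original example), a subclassed `Model` creates its layers in `__init__` and defines the forward pass in `call`:

```python
import tensorflow as tf
from tensorflow.keras import Model, layers


class TinyMLP(Model):
    def __init__(self, num_classes=10):
        super(TinyMLP, self).__init__()
        # Layers are created once here; their weights are built lazily
        # on the first call, based on the actual input shape.
        self.hidden = layers.Dense(128, activation="relu")
        self.out = layers.Dense(num_classes)

    def call(self, x, training=False):
        # Forward pass; `training` lets layers such as Dropout switch behavior.
        x = self.hidden(x)
        return self.out(x)


model = TinyMLP()
logits = model(tf.random.normal([4, 784]))  # weights are built on this first call
print(logits.shape)  # (4, 10)
```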


