How to write TensorFlow in Python
TensorFlow is not a pure neural network framework, but a framework for numerical computation based on data flow graphs.
TensorFlow uses directed graphs to represent computing tasks. The nodes of the graph are called ops (operations) and represent units of data processing; the edges of the graph describe the direction in which data flows.
The framework computes by processing flows of tensors, which is the origin of the name TensorFlow.
TensorFlow uses tensors to represent data. A tensor is a multi-dimensional array, represented in Python by numpy.ndarray.
TensorFlow uses sessions to execute graphs and variables to maintain state. tf.constant is an op with only an output, and it is often used as a data source.
Let's build a simple graph with only two constants as inputs, then perform a matrix multiplication:

from tensorflow import Session, device, constant, matmul

# If you don't use the with Session() statement, you must call session.close() manually.
# with device(...) specifies the device that performs the computation:
# "/cpu:0": the CPU of the machine
# "/gpu:0": the first GPU of the machine, if any
# "/gpu:1": the second GPU of the machine, and so on

with Session() as session:  # create the context for executing the graph
    with device('/cpu:0'):  # specify a computing device
        mat1 = constant([[3, 3]])  # create source nodes
        mat2 = constant([[2], [2]])
        product = matmul(mat1, mat2)  # create a node whose inputs are the two source nodes
        result = session.run(product)  # perform the computation
        print(result)
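As the comment above notes, without the with-statement the session must be closed by hand. A minimal equivalent sketch, reusing the same product node:

session = Session()
result = session.run(product)
print(result)
session.close()  # required when not using the with-statement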
Let's make a counter with a variable:

from tensorflow import Session, constant, Variable, add, assign, initialize_all_variables

state = Variable(0, name='counter')  # create a counter
one = constant(1)  # create a data source: 1
val = add(state, one)  # create a node holding the new value
update = assign(state, val)  # update the counter
setup = initialize_all_variables()  # initialize all variables

with Session() as session:
    session.run(setup)  # perform the initialization
    print(session.run(state))  # print the initial value
    for i in range(3):
        session.run(update)  # perform the update
        print(session.run(state))  # print the counter value
Before using variables, you must run the graph returned by initialize_all_variables(); running a variable node returns the current value of the variable.
In this example the graph is built outside of a with-context, and no computing device is specified.
In the examples above, session.run is given a single op as its argument, but run can in fact accept a list of ops as input:
session.run([op1, op2])
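A minimal runnable sketch of this (op1 and op2 are illustrative names, built here from the constants used earlier): run returns one result per fetched op, in the same order.

from tensorflow import Session, constant, matmul

op1 = constant([[3, 3]])
op2 = matmul(op1, constant([[2], [2]]))

with Session() as session:
    result1, result2 = session.run([op1, op2])  # one result per fetched op
    print(result1, result2)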
The examples above always use constants as data sources; feeds allow data to be supplied dynamically at run time:
from tensorflow import Session, placeholder, mul, float32

input1 = placeholder(float32)
input2 = placeholder(float32)
output = mul(input1, input2)

with Session() as session:
    print(session.run(output, feed_dict={input1: [3], input2: [2]}))
Implementing a simple neural network
Neural networks are a widely used machine learning model. For the principles behind them you can refer to a short introductory article, or try the online demo at TensorFlow Playground.
First, define a BPNeuralNetwork class:

class BPNeuralNetwork:
    def __init__(self):
        self.session = tf.Session()
        self.input_layer = None
        self.label_layer = None
        self.loss = None
        self.trainer = None
        self.layers = []

    def __del__(self):
        self.session.close()
Next, write a function that generates a single layer of the network. Each layer is represented in the data flow graph by a variable matrix for the connection weights to the previous layer, a variable vector for the bias, and an activation function applied to the result:
def make_layer(inputs, in_size, out_size, activate=None):
    weights = tf.Variable(tf.random_normal([in_size, out_size]))
    basis = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    result = tf.matmul(inputs, weights) + basis
    if activate is None:
        return result
    else:
        return activate(result)
Use a placeholder as the input layer:

self.input_layer = tf.placeholder(tf.float32, [None, 2])
The second parameter of placeholder is the shape of the tensor: [None, 2] means a two-dimensional array with any number of rows and 2 columns, following the same convention as numpy.array.shape. Here self.input_layer is defined as an input layer that accepts two-dimensional samples.
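A minimal sketch of what the None dimension allows (tf.identity is used here only to have an op to run; it is not part of the original example): the same placeholder can be fed batches with different numbers of rows.

x = tf.placeholder(tf.float32, [None, 2])
y = tf.identity(x)  # trivial op that passes the input through
with tf.Session() as session:
    print(session.run(y, feed_dict={x: [[1, 2]]}))          # one row
    print(session.run(y, feed_dict={x: [[1, 2], [3, 4]]}))  # two rows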
A placeholder is also used for the labels of the training data:

self.label_layer = tf.placeholder(tf.float32, [None, 1])
Use make_layer to define two layers for the network, treating the last one as the output layer, as shown below:
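These are the corresponding lines from the complete listing further down (the output layer has a single unit so that its shape matches the [None, 1] label layer):

self.layers.append(make_layer(self.input_layer, 2, 10, activate=tf.nn.relu))
self.layers.append(make_layer(self.layers[0], 10, 1, activate=None))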
The loss function is the mean squared error between the labels and the network's output:

self.loss = tf.reduce_mean(tf.reduce_sum(tf.square(self.label_layer - self.layers[1]), reduction_indices=[1]))
tf.train provides a number of optimizers that can be used to train the network by minimizing the loss function:
self.trainer = tf.train.GradientDescentOptimizer(learn_rate).minimize(self.loss)
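Gradient descent is only one option; as an aside not taken from the original, tf.train also provides other optimizers such as AdamOptimizer, which could be swapped in on the same line:

# hypothetical alternative to plain gradient descent
self.trainer = tf.train.AdamOptimizer(learn_rate).minimize(self.loss)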
Run the neural network model in a session, initializing the variables first:

initer = tf.initialize_all_variables()
# do the training
self.session.run(initer)
for i in range(limit):
    self.session.run(self.trainer, feed_dict={self.input_layer: cases, self.label_layer: labels})
Use the trained model to make predictions:

self.session.run(self.layers[-1], feed_dict={self.input_layer: case})
Complete code:

import tensorflow as tf
import numpy as np


def make_layer(inputs, in_size, out_size, activate=None):
    weights = tf.Variable(tf.random_normal([in_size, out_size]))
    basis = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    result = tf.matmul(inputs, weights) + basis
    if activate is None:
        return result
    else:
        return activate(result)


class BPNeuralNetwork:
    def __init__(self):
        self.session = tf.Session()
        self.input_layer = None
        self.label_layer = None
        self.loss = None
        self.optimizer = None
        self.layers = []

    def __del__(self):
        self.session.close()

    def train(self, cases, labels, limit=100, learn_rate=0.05):
        # build the network
        self.input_layer = tf.placeholder(tf.float32, [None, 2])
        self.label_layer = tf.placeholder(tf.float32, [None, 1])
        self.layers.append(make_layer(self.input_layer, 2, 10, activate=tf.nn.relu))
        # one output unit, matching the [None, 1] label layer
        self.layers.append(make_layer(self.layers[0], 10, 1, activate=None))
        self.loss = tf.reduce_mean(tf.reduce_sum(tf.square(self.label_layer - self.layers[1]), reduction_indices=[1]))
        self.optimizer = tf.train.GradientDescentOptimizer(learn_rate).minimize(self.loss)
        # do the training
        initer = tf.initialize_all_variables()
        self.session.run(initer)
        for i in range(limit):
            self.session.run(self.optimizer, feed_dict={self.input_layer: cases, self.label_layer: labels})

    def predict(self, case):
        return self.session.run(self.layers[-1], feed_dict={self.input_layer: case})

    def test(self):
        x_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
        y_data = np.array([[0, 1, 1, 0]]).transpose()
        test_data = np.array([[0, 1]])
        self.train(x_data, y_data)
        print(self.predict(test_data))


nn = BPNeuralNetwork()
nn.test()
Although the model above is simple, it is not flexible to use. In the same way, the author implements a network with configurable input and output dimensions and multiple hidden layers; see dynamic_bpnn.py:
import tensorflow as tf
import numpy as np


def make_layer(inputs, in_size, out_size, activate=None):
    weights = tf.Variable(tf.random_normal([in_size, out_size]))
    basis = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    result = tf.matmul(inputs, weights) + basis
    if activate is None:
        return result
    else:
        return activate(result)


class BPNeuralNetwork:
    def __init__(self):
        self.session = tf.Session()
        self.loss = None
        self.optimizer = None
        self.input_n = 0
        self.hidden_n = 0
        self.hidden_size = []
        self.output_n = 0
        self.input_layer = None
        self.hidden_layers = []
        self.output_layer = None
        self.label_layer = None

    def __del__(self):
        self.session.close()

    def setup(self, ni, nh, no):
        # set the layer sizes
        self.input_n = ni
        self.hidden_n = len(nh)  # number of hidden layers
        self.hidden_size = nh  # number of units in each hidden layer
        self.output_n = no
        # build the input layer
        self.input_layer = tf.placeholder(tf.float32, [None, self.input_n])
        # build the label layer
        self.label_layer = tf.placeholder(tf.float32, [None, self.output_n])
        # build the hidden layers
        in_size = self.input_n
        out_size = self.hidden_size[0]
        inputs = self.input_layer
        self.hidden_layers.append(make_layer(inputs, in_size, out_size, activate=tf.nn.relu))
        for i in range(self.hidden_n - 1):
            in_size = out_size
            out_size = self.hidden_size[i + 1]
            inputs = self.hidden_layers[-1]
            self.hidden_layers.append(make_layer(inputs, in_size, out_size, activate=tf.nn.relu))
        # build the output layer
        self.output_layer = make_layer(self.hidden_layers[-1], self.hidden_size[-1], self.output_n)

    def train(self, cases, labels, limit=100, learn_rate=0.05):
        self.loss = tf.reduce_mean(tf.reduce_sum(tf.square(self.label_layer - self.output_layer), reduction_indices=[1]))
        self.optimizer = tf.train.GradientDescentOptimizer(learn_rate).minimize(self.loss)
        # do the training
        initer = tf.initialize_all_variables()
        self.session.run(initer)
        for i in range(limit):
            self.session.run(self.optimizer, feed_dict={self.input_layer: cases, self.label_layer: labels})

    def predict(self, case):
        return self.session.run(self.output_layer, feed_dict={self.input_layer: case})

    def test(self):
        x_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
        y_data = np.array([[0, 1, 1, 0]]).transpose()
        test_data = np.array([[0, 1]])
        self.setup(2, [10, 5], 1)  # 2 inputs, hidden layers of 10 and 5 units, 1 output
        self.train(x_data, y_data)
        print(self.predict(test_data))


nn = BPNeuralNetwork()
nn.test()