Why do programmers learn deep learning?
Deep learning is itself a vast body of knowledge. In this article, we look at what deep learning means from a programmer's point of view, and how we can use this rapidly developing discipline to improve our abilities as software developers.
This article is based on Fei's talk at the 2016 QCon Global Software Development Conference (Shanghai).
Preface
In 1973, the United States released the popular sci-fi movie Westworld, and three years later came its sequel, Futureworld. The sequel was introduced to China in the early 1980s under the title Future World, and it shocked me. The film is full of robots with integrated circuit boards beneath their expressive faces, which made that imagined future feel distant and mysterious.
Now it is 2016, and many of you may be watching the big-budget HBO series Westworld, built on the same theme. Where the first two films were limited to topics such as robots and artificial intelligence, the 2016 series makes great breakthroughs in its plot and in its treatment of AI thinking: it no longer dwells on whether robots will threaten humanity, but explores more philosophical questions such as "dreams are mainly memories".
The question "how does memory affect intelligence" is well worth our consideration, and it also offers a good hint of how far the field of artificial intelligence has progressed today.
The topic we are discussing today is not artificial intelligence in general. If you are interested in deep learning, you have probably searched for related keywords. Using "deep learning" as a keyword on Google, I got 26.3 million search results, more than 3 million more than a week earlier. That figure alone shows how fast deep learning content is growing and how much attention people are paying to it.
From another angle, consider how popular deep learning is in the market. Since 2011, more than 140 startups focusing on artificial intelligence and deep learning have been acquired; in 2016 alone there were more than 40 such mergers and acquisitions.
The most aggressive acquirer is Google, which has bought 11 artificial intelligence startups, the most famous being DeepMind, whose AlphaGo defeated Lee Sedol (9-dan). Apple, Intel and Twitter follow close behind. Take Intel: this year alone it acquired three startups, Itseez, Nervana and Movidius. This wave of acquisitions is all about positioning for artificial intelligence and deep learning.
When we search for the topic of deep learning, we often see some obscure terms, such as gradient descent, back propagation, convolutional neural network, restricted Boltzmann machine and so on.
Open any technical article and you will see all kinds of mathematical formulas. What I am showing is not a high-level academic paper but Wikipedia's introduction to the Boltzmann machine; even this popular-science material demands more mathematics than most readers command.
Against this background, my topic today can be summarized in three points: first, why learn deep learning at all; second, the key concept in deep learning is the neural network, so what is a neural network; third, as programmers who want to become deep learning developers, what toolbox do we need, and where do we start?
Why study deep learning?
First of all, why learn deep learning? This market has no shortage of fashionable new concepts and vocabulary, so what makes deep learning different? I am very fond of a metaphor Andrew Ng once used.
He compared deep learning to a rocket whose most important part is its engine; in this field, the core of that engine is the neural network. As everyone knows, a rocket also needs fuel, and big data constitutes that other essential component of the rocket. In the past, when we talked about big data, we focused more on the ability to store and manage data, and those methods and tools amounted to statistics and summaries of historical data.
For the unknown future, those traditional methods cannot help us draw predictive conclusions from big data. But if we combine neural networks with big data, we can see big data's true value and significance. Andrew Ng once said, "We believe that deep learning, represented by neural networks, is the shortest path to getting closest to artificial intelligence." This is one of the most important reasons to study deep learning.
Secondly, as our data processing and computing capabilities keep improving, artificial intelligence technology represented by deep learning has progressed rapidly in performance compared with traditional AI techniques, thanks mainly to the achievements of the computer and related industries over the past few decades. In artificial intelligence, performance is the other important reason we choose deep learning.
This is a video NVIDIA released this year about applying deep learning to driverless cars, showing how far autonomous driving can get after only 3,000 miles of training. In the experiments at the beginning of this year, the system had no real intelligence: all kinds of scary situations kept occurring, and in some cases it even needed manual intervention.
After 3,000 miles of training, however, we saw astonishing performance on all kinds of complex road conditions: mountain roads, highways, mud. Note that this deep learning model had been trained for only a few months and 3,000 miles.
If this model keeps improving, how powerful will its processing capability become? The key technology in this scenario is undoubtedly deep learning. We can draw a conclusion: deep learning offers us powerful capabilities, and a programmer who masters this technology gains real leverage.
A quick introduction to neural networks
If we have no more doubts about deep learning, the next thing we care about is what knowledge we need to enter this field. The most important key technology is the "neural network". When that term comes up, it is easy to confuse two completely different concepts.
One is the biological neural network; the other is the artificial neural network we are going to talk about today. Perhaps you have friends who work in artificial intelligence; ask them about neural networks and they will throw out so many strange concepts and terms that you are left in a fog and can only back away.
Most programmers feel a great distance between themselves and the concept of artificial neural networks, because it is rare for someone to take the time to explain what a neural network essentially is, and the theories and concepts found in books seldom lead you to a clear, simple conclusion.
Today, let us look at what a neural network is from the programmer's point of view. I first encountered the concept through a movie, 1991's Terminator 2, in which Schwarzenegger has the line: "My CPU is a neural-net processor; a learning computer." Historically, humanity's exploration of its own intelligence began far earlier than the study of neural networks.
In 1852, because of an accidental mistake, an Italian scholar dropped a human head into a nitrate solution and thereby gained the first chance to observe a neural network with the naked eye. This accident accelerated the exploration of the mystery of human intelligence and opened the development of concepts such as artificial intelligence and the neuron.
Does the biological neural network have anything to do with the neural networks we discuss today? Apart from borrowing some terminology, nothing at all: today's neural network is entirely a concept of mathematics and computer science, which is itself a sign of the maturity of artificial intelligence. We should keep this distinction clear and not confuse biological neural networks with the artificial intelligence we are discussing today.
In the mid-1990s, Vapnik and others proposed the support vector machine (SVM). This algorithm soon showed great advantages over neural networks in many respects: no parameters to tune, high efficiency, and a globally optimal solution. For these reasons, SVM quickly defeated the neural network and became the mainstream of that period, and the study of neural networks fell once again into an ice age.
During that abandoned decade, a few scholars persisted in their research. One of the most important was Professor Geoffrey Hinton of the University of Toronto in Canada. In 2006 he published a paper in the famous journal Science, proposing the concept of the "deep belief network" for the first time.
Unlike traditional training methods, the deep belief network has a "pre-training" step, which easily brings the weights of the neural network close to an optimal solution, after which "fine-tuning" optimizes the whole network. The combination of these two techniques greatly reduces the time needed to train multilayer neural networks. In that paper, he gave the learning methods related to multilayer neural networks a new name: "deep learning".
Deep learning soon appeared in speech recognition. Then, in 2012, it made its mark in image recognition: in the ImageNet competition, Hinton and his students successfully trained on a million pictures in 1,000 categories and achieved a classification error rate of 15%, nearly 11 percentage points better than the second place.
This result fully demonstrated the superiority of multilayer neural networks for recognition. Since then, deep learning has entered a new golden age; the fiery development of deep learning and neural networks we see today detonated from that moment.
If we use a neural network to build a classifier, what does the structure of that network look like?
In fact, the structure is very simple. The diagram here is a schematic of a simple neural network. A neural network is essentially a directed graph. Each node on the graph borrows a biological term, the "neuron", and the directed arcs connecting neurons are regarded as "nerves". The neurons in this picture are not the most important part; the nerves connecting them are. Each arc is directed, and each neuron points to nodes in the next layer.
Nodes are arranged in layers, and each node points to nodes in the next layer. Nodes at the same level are not connected to one another, and connections cannot skip over layers. Every arc carries a value, usually called a "weight". Given the weights, a formula computes the value of the node each arc points to. Where do the weight values come from? They are obtained through training: the weights usually start as random numbers, and training adjusts them until the outputs come as close as possible to the true values. The result is a model that can be reused, which is what we call a trained classifier.
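The idea that a weight starts as a random number and is nudged by training toward a value that fits the data can be sketched in a few lines. This is a minimal illustration with made-up data (the "true" weight is 2.0), not code from any real framework:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal()                        # weight starts as a random number
x = np.array([1.0, 2.0, 3.0, 4.0])     # hypothetical inputs
y = 2.0 * x                            # targets: the true weight is 2.0

learning_rate = 0.05
for _ in range(100):
    pred = w * x                        # value computed along the "nerve"
    grad = np.mean(2 * (pred - y) * x)  # gradient of the mean squared error
    w -= learning_rate * grad           # gradient descent step

# After training, w has moved from its random start to very near 2.0;
# this trained weight is what a model saves and reuses.
```

This is the same gradient descent mentioned among the "obscure terms" earlier, reduced to a single weight.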
Nodes are divided into input nodes and output nodes, with the hidden layers in between. Simply put, we have data input items on one side, layers of the network in the middle (the hidden layers, so called because those levels are invisible to us from outside), and the output nodes on the other side. The input and output nodes are fixed by the problem; the hidden layers are the part of the model we get to design. This is the simplest concept of a neural network.
For a simple analogy, let me explain with a four-layer neural network. On the left are the input nodes: the several input items might represent the RGB values, taste, or other data items of different apples. The hidden layers in the middle are the network we design; this network has several levels, and the weights between levels are the result of continuous training.
The final result is stored in the output nodes. Like a flow, each nerve has a direction, and different calculations happen layer by layer: in the hidden layers, the computed output of each node becomes an input item to the next layer, and the final result is saved in the output nodes. The output value closest to a category determines the classification: whichever output wins, the sample is assigned to that class. This is a simple overview of using a neural network.
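The layer-by-layer flow just described can be sketched as a forward pass in NumPy. The weights and input here are made up purely for illustration (3 input items, 4 hidden neurons, 2 output classes); a real classifier would obtain its weights through training:

```python
import numpy as np

def sigmoid(z):
    # squashes each node's combined input into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
W1 = rng.normal(size=(3, 4))    # weights on arcs: input layer -> hidden layer
W2 = rng.normal(size=(4, 2))    # weights on arcs: hidden layer -> output layer

x = np.array([0.8, 0.2, 0.5])   # e.g. normalized RGB values of an apple

hidden = sigmoid(x @ W1)        # each hidden node combines its weighted inputs
output = sigmoid(hidden @ W2)   # output nodes hold the final result

predicted_class = int(np.argmax(output))  # the largest output wins the class
```

Note how each layer's output simply becomes the next layer's input: that is the entire "flow" of the directed graph.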
Besides the left-to-right structural diagram, another common convention draws the network bottom-up, with the input layer at the bottom of the figure and the output layer at the top. The left-to-right form is widely used in the literature of Andrew Ng and LeCun; in the Caffe framework, the expression is bottom-up.
Simply put, there is nothing mysterious about a neural network: it is just a process of extracting and learning features using the processing power of a graph. In his famous 2006 work, Hinton summarized deep learning into three essential elements: computation, data, and model. With these three, a deep learning system can be built.
The programmer's toolbox
For programmers, mastering theory serves better programming practice. So let us see what tools a programmer should prepare for deep learning practice.
Hardware
In terms of hardware, the first thing we think of is the CPU and the computing power we may need. Besides the usual CPU architectures, there are CPUs augmented with multipliers to improve computing power. There are also DSPs for application scenarios in different fields, such as special-purpose signal processors for handwriting recognition and speech recognition. Another option is the GPU, currently the hottest area for deep learning applications. The last category is the FPGA (field-programmable gate array).
These four options each have their advantages and disadvantages, and the products differ greatly. Comparatively, the CPU is weak in raw computing power but good at management and scheduling, such as reading data, managing files and human-computer interaction, and its tooling is very rich. The DSP, by contrast, has weaker management ability but strengthened specialized computing ability; both rely on high clock frequency, and both suit algorithms with heavy recursive computation that are inconvenient to split up. The GPU has weak management ability but strong computing power; because it has many computing units, it better suits algorithms that stream whole blocks of data. The FPGA is strong in both management and processing, but its development cycle is long and complex algorithms are difficult to develop on it; in real-time performance, however, the FPGA is the best. Judging from current developments, the computing resources ordinary programmers actually use are still the CPU and the GPU, with the GPU the hottest of the two.
This is a p2 instance on AWS that I prepared for this talk the day before yesterday. Updating the instance, installing the driver and setting up the environment took only a few commands, and the total time to create and configure the resource was about ten minutes. Before that, I had spent two days installing and debugging the machine mentioned above.
We can also compare costs. A p2.8xlarge instance costs $7.20 per hour, while my own computer cost ¥16,904 in total; that money would buy me more than 350 hours of p2.8xlarge time. One year of using an AWS deep learning workstation would offset all my effort, and as technology advances I can keep upgrading my instance, getting more and more processing power at limited cost. This is the real value of cloud computing.
What is the relationship between cloud computing and deep learning? On August 8 this year, an article on the IDG website addressed this topic. It predicted that if the parallelism of deep learning keeps improving and the processing power offered by cloud computing keeps growing, their combination may produce a new generation of deep learning with far greater impact and influence. This is a direction worth everyone's attention!
Software
Beyond the basic hardware environment, programmers care even more about the software resources for deep learning development. Here I list some of the frameworks and tools I have used.
Scikit-learn is the most popular Python machine learning library. It has several attractive characteristics: a simple, efficient and extremely rich set of data mining and data analysis algorithm implementations; built on NumPy, SciPy and matplotlib, it covers the whole process from exploratory data analysis and visualization through to algorithm implementation; and it is open source with very rich documentation.
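A quick taste of scikit-learn's uniform API, using its bundled iris dataset with a k-nearest-neighbors classifier (the dataset and classifier choice here are just an illustration; any of its estimators follows the same fit/predict pattern):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load a small built-in dataset and hold out a quarter for testing.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)              # train the classifier
accuracy = clf.score(X_test, y_test)   # evaluate on held-out data
```

Every scikit-learn estimator exposes the same `fit`/`predict`/`score` interface, which is a large part of why it integrates the whole analysis workflow so smoothly.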
Caffe focuses on convolutional neural networks and image processing. But Caffe has not been updated for a long time, and Yangqing Jia, the framework's lead developer, jumped to Google this year; perhaps the former overlord will have to give way to others.
Theano is a very flexible Python machine learning library. It is very popular in research, convenient to use, and makes it easy to define complex models. TensorFlow's API is very similar to Theano's. I also gave a talk on Theano at this year's QCon conference in Beijing.
Jupyter Notebook is a powerful Python code editor based on IPython. Deployed as a web page, it is very convenient for interactive processing and well suited to algorithm research and data handling.
Torch is an excellent machine learning library, implemented in the relatively niche Lua language; but because it uses LuaJIT, its programs are remarkably efficient. Facebook has bet on Torch for artificial intelligence and has now even launched its own upgraded framework, Torchnet.
With so many deep learning frameworks, does it feel like a hundred flowers blooming? What I want to focus on today is TensorFlow, the open-source machine learning framework Google released in 2015, and Google's second-generation deep learning framework. Many companies have built interesting applications with TensorFlow, to very good effect.
What can TensorFlow do? It can be applied to regression models and to neural networks for deep learning. For deep learning, it integrates distributed representation, convolutional neural networks (CNN), recurrent neural networks (RNN), and long short-term memory networks (LSTM).
The first concept to understand about TensorFlow is the tensor. The dictionary defines a tensor as a multilinear function that can express linear relationships between vectors, scalars and other tensors. That definition is hard to grasp; in my own words, a tensor is just an "n-dimensional array".
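The "n-dimensional array" view is easiest to see in code. NumPy arrays make the point directly (the example values here are arbitrary):

```python
import numpy as np

# A tensor is just an n-dimensional array; n is the tensor's rank.
scalar = np.array(5.0)                       # rank-0 tensor: a single number
vector = np.array([1.0, 2.0, 3.0])           # rank-1 tensor: a list of numbers
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])              # rank-2 tensor: a grid of numbers
image = np.zeros((28, 28, 3))                # rank-3 tensor: e.g. an RGB image

for t in (scalar, vector, matrix, image):
    print(t.ndim, t.shape)
```

Once you see a tensor as nothing more than an array with a rank and a shape, the intimidating terminology largely evaporates.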
To use TensorFlow, a programmer must understand a few basic concepts: it represents computing tasks as graphs; graphs are executed in a context called a session; data is represented as tensors; state is maintained through variables; and feed and fetch operations can assign values to, or retrieve data from, arbitrary operations.
In a word, TensorFlow is a dataflow-graph computing environment with stateful graphs: each node performs a data operation, and the edges supply the dependencies and direction that form a complete data flow.
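The build-a-graph-then-run-it model can be illustrated with a toy evaluator. This is emphatically not the real TensorFlow API, just a sketch of the idea: operations are declared first as graph nodes, and nothing is computed until a "run" call feeds in values and fetches a result:

```python
class Node:
    """A node in a toy dataflow graph: an operation plus its input edges."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

def placeholder():
    return Node("feed")          # gets its value from the feed at run time

def add(a, b):
    return Node("add", a, b)

def mul(a, b):
    return Node("mul", a, b)

def run(fetch, feed):
    """Evaluate the fetched node by recursively evaluating its dependencies."""
    if fetch.op == "feed":
        return feed[fetch]
    args = [run(n, feed) for n in fetch.inputs]
    return args[0] + args[1] if fetch.op == "add" else args[0] * args[1]

# Build the graph (x * y) + y first; no arithmetic happens yet.
x, y = placeholder(), placeholder()
out = add(mul(x, y), y)

# Then execute it, feeding concrete values and fetching the output.
result = run(out, {x: 3.0, y: 4.0})
```

Separating graph construction from execution is what lets a real framework analyze the whole computation, distribute it, and differentiate through it before any number is crunched.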
Installing TensorFlow is simple, but the package provided on the official website only supports CUDA 7.5. Considering CUDA 8's exciting new features, and the fact that it will be officially released soon, if you want to experience CUDA 8 immediately you can only build TensorFlow from source. TensorFlow supports Python 2.7 and 3.3+.
In addition, Python programmers need to install some necessary libraries, such as numpy and protobuf. cuDNN is recognized as the best development library for convolution processing, so be sure to install it. A regular TensorFlow installation is very simple; one command is enough:
$ pip3 install --upgrade tensorflow
A good first project to try is the open-source neural-style (github.com/anishathalye/neural-style). The Belarusian modern impressionist artist Leonid Afremov is good at expressing urban and landscape themes with strong colors, especially in his series of rain scenes. He habitually uses large color blocks to create light-and-shadow effects, and he has a very accurate grasp of reflective objects and environmental color.
So I found a photo of Shanghai's Oriental Pearl TV Tower, hoping to have TensorFlow learn Leonid Afremov's painting style and render the photo with his rich light, shadow and color. Using TensorFlow and the code of the project above, I ran a thousand iterations on an AWS p2 instance and obtained the result shown below.
The processing code is only 350 lines, and the model used is VGG, the star that made its name in the 2014 ImageNet competition. This model is very good, and its theme is "going deeper".
TensorFlow can produce such works not merely as entertainment to amuse everyone, but to do more interesting things. Extending the same processing power to video yields the effect shown below: the footage is rendered in a new style reminiscent of Van Gogh's masterpiece "The Starry Night".
Imagine applying this processing power to more fields; what magical effects might it have? The bright future invites infinite reverie. In fact, application development in many of the fields we currently work in can be transformed by neural networks and deep learning. And deep learning is not hard to master: every programmer can learn this technology and, using available resources, quickly become a deep learning developer.
Concluding remarks
We cannot predict what the future will be like. The writer Ray Kurzweil published The Singularity Is Near in 2005, in which he clearly tells us that that era is coming. As people standing before the dawn of that era, can we use our ability to learn to speed up the process and realize the dream?
The Development of Artificial Intelligence in China
The era of artificial intelligence has undoubtedly arrived. What this era needs, of course, is engineers who have mastered artificial intelligence and can solve concrete problems. Frankly, such engineers are still rare in the market, and workplace salaries reflect the demand for them. As for the field itself, artificial intelligence today already has the capacity for large-scale industrialization.
It is therefore imperative for engineers to master applied artificial intelligence technology as soon as possible. Online learning materials on artificial intelligence are now extraordinarily abundant, and engineers who can learn quickly will certainly stand out in the AI tide.
The environment for developing an artificial intelligence industry in China is ready. In terms of the entrepreneurial climate, the quality of personnel, and even market opportunity, all the conditions for industrial transformation are fully in place. Compared with the United States, China's performance in many fields of artificial intelligence can be said to be in no way inferior. As far as the technology itself is concerned, engineers in China stand at the same starting line as the best technical teams in the world.
Time waits for no one, and engineers in China have the opportunity to show their talents in this field. Two pitfalls, however, should be avoided. The first is aiming too high and blindly measuring ourselves against foreign work; every body of accumulated experience has its own strengths and specializations, so we should build on what we have and seek gradual breakthroughs. The second is being too eager for quick success and blindly chasing the market; the engineering of artificial intelligence requires deep foundational accumulation and cannot be copied overnight.
The achievements of Chinese researchers in artificial intelligence are plain to see. In one article, Wang Yonggang counted the "deep learning" papers indexed by SCI from 2013 to 2015 and found that China had surpassed the United States to become the leader in 2014 and 2015.
Another pleasant surprise: among the 22 authors of the paper "TensorFlow: A System for Large-Scale Machine Learning", published by Google's Jeff Dean in 2016, authors with evidently Chinese names account for 1/5. And if you wanted to list the Chinese giants of artificial intelligence, Andrew Ng, Sun Jian, Yang Qiang, Huang Guangbin, Ma Yi, Zhang Dapeng... you could easily produce a long list of names.
For China, the urgent task at present is the industrialization of artificial intelligence technology; only then can its advantages in scientific research be converted into overall, comprehensive advantages. China is the world's largest consumer market and a manufacturing power, and it has every opportunity to use that market advantage to become a leader in this field.
Innovative enterprises in Silicon Valley
Although I have visited Silicon Valley many times, I have never been able to work there for long. In the artificial intelligence market, we hear most about the moves of large technology companies such as Google, Apple, Intel and Amazon, but the American market also has a large number of small startups with amazing performance in the field. Taking Silicon Valley companies alone:
Captricity, which provides information extraction of handwritten data;
VIVLab, developing virtual assistant service for speech recognition;
TERADEEP, using FPGA to provide an efficient convolutional neural network scheme;
There is also NetraDyne, which provides driverless solutions.
This list could go on for a long time; many teams are pursuing their dream of making history with artificial intelligence. These teams and their fields of focus are all worth our study and attention.