
Neural Networks: You've Got It So Easy

Caffe

With Caffe you describe your network in configuration files and hand them to the Caffe command line tool, which will then train a neural network. You can then load this trained net using a Python program and use it to do something, image classification for example.
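To make that loading step concrete, here is a rough sketch using Caffe's Python bindings. The file names and the input blob name 'data' are placeholders borrowed from Caffe's standard examples, not anything from this article, and the input is filled with random values just so the snippet is self-contained.

```python
# Illustrative only: loading a net that the caffe command line tool already trained.
import numpy as np
import caffe

# Placeholder file names; substitute your own network definition and trained weights.
net = caffe.Net('deploy.prototxt', 'trained.caffemodel', caffe.TEST)

# Fill the input blob (named 'data' in Caffe's standard examples) with an image;
# here we just use random values shaped to match whatever the blob expects.
net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)
out = net.forward()   # returns a dict of output blobs, e.g. class probabilities
```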

CNTK

Languages: Python, C++, C#

This is the Microsoft Cognitive Toolkit (CNTK) and it runs on Windows and Linux. They're currently working on a version to be used with Keras.

Keras

Language: Python

Written in Python, Keras uses either TensorFlow or Theano underneath, making it easier to use those frameworks. There are also plans to support CNTK. Work is also underway to integrate Keras into TensorFlow, which will result in a separate TensorFlow-only version of Keras.
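As a rough sketch of why Keras counts as easier, here is roughly what defining and compiling a small classifier looks like. The layer sizes, optimizer, and 784-value input are arbitrary placeholders for illustration, not anything taken from the article.

```python
# A minimal, illustrative Keras model; sizes and settings are arbitrary.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(784,)))   # hidden layer
model.add(Dense(10, activation='softmax'))                    # one output per class

# Keras hands the actual number crunching to TensorFlow or Theano underneath.
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(x_train, y_train, epochs=5)   # train on your labeled data
```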

TF Learn

Language: Python

Like Keras, this is a high-level library built on top of TensorFlow.

FANN

Languages: supports over 15 languages, no GPU support

This is a high-level open source library written in C. It's limited to fully connected and sparsely connected neural networks. However, it's been popular over the years and has even been included in Linux distributions. It recently showed up here on Hackaday in a robot that learned to walk using reinforcement learning, a machine learning technique that often makes use of neural networks.

Torch

Languages: Lua

An open source library written in C. Interestingly, the front page of their website says that Torch is embeddable, with ports to iOS, Android, and FPGA backends.

PyTorch

Language: Python

PyTorch is relatively new; their website says it's in early-release beta, but there seems to be a lot of interest in it. It runs on Linux and OS X and uses Torch underneath.

There are no doubt others that I've missed. If you have a particular favorite that's not here, please let us know in the comments.

Which one should you use? Unless the programming language or OS is an issue, another factor to keep in mind is your skill level. If you're uncomfortable with math or don't want to dig deeply into the neural network's nuances, then choose a high-level one. In that case, stay away from TensorFlow, where you have to learn more about the API than with Keras, TFLearn, or the other high-level ones. Frameworks that highlight their math functionality usually require you to do more work to produce the network. Another factor is whether or not you'll be doing fundamental research. A high-level framework may not give you enough access to the innards to start making crazy networks, perhaps with connections spanning multiple layers or within layers, and with data flowing in all directions.

Online Services

Are you looking to add something a neural network would offer to your hack, but don't want to take the time to learn the intricacies of neural networks? For that, there are services you can use by connecting your hack to the internet.

We've seen countless examples making use of Amazon's Alexa for voice recognition. Google also has its Cloud machine learning services, which include vision and speech. Its vision service has shown up here using Raspberry Pis for candy sorting and for reading human emotions. The Wekinator is aimed at artists and musicians; we've seen it used to train a neural network to respond to various gestures for turning things on and off around the house, and for making a virtual world's tiniest violin. Not to be left out, Microsoft also has its Cognitive Services APIs, including vision, speech, language, and others.

GPUs and TPUs

Iterating through a neural network
Training a neural network requires iterating through the neural network, forward and then backward, each time improving the network's accuracy. Up to a point, the more iterations you can do, the better the final accuracy will be when you stop. The number of iterations might be in the hundreds or even thousands. With 1980s and 1990s computers, achieving enough iterations could take an unacceptable amount of time. According to the article Deep Learning in Neural Networks: An Overview, in 2004 a 20x speed increase was achieved with a GPU for a fully connected neural network. In 2006 a 4x increase was achieved for a convolutional neural network. By 2010, training on a GPU was as much as 50 times faster than on a CPU. As a result, accuracies were much higher.
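For a sense of what "iterating forward and then backward" means in code, here is a bare-bones sketch of gradient descent on a single-layer network with made-up data. It only shows the iteration structure; the data, learning rate, and iteration count are arbitrary, and no framework API is implied.

```python
# Toy example of the training iteration: forward pass, backward pass, weight update.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))            # 100 training samples with 3 inputs each
true_w = np.array([1.0, -2.0, 0.5])
y = x @ true_w                           # targets the network should learn to produce

w = np.zeros(3)                          # the network's weights, initially untrained
for iteration in range(1000):            # more iterations means better accuracy, up to a point
    pred = x @ w                         # forward: run the inputs through the network
    grad = x.T @ (pred - y) / len(x)     # backward: gradient of the mean squared error
    w -= 0.1 * grad                      # nudge the weights to reduce the error
```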

Nvidia Titan Xp graphics card. Image credit: Nvidia
How do GPUs help? A huge part of training a neural network involves doing matrix multiplication, something which is done much faster on a GPU than on a CPU. Nvidia, a leader in making graphics cards and GPUs, created an API called CUDA which is used by neural network software to make use of the GPU. We point this out since you'll see the term CUDA a lot. With the spread of deep learning, Nvidia has added more APIs, including cuDNN (the CUDA Deep Neural Network library), a collection of finely tuned neural network primitives and another term you'll see a lot.
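As an illustration of the matrix multiplication point (not something from the article), here is how the same multiply can be pushed onto an Nvidia GPU through CUDA using PyTorch; the matrix size is arbitrary.

```python
# The same matrix multiplication on the CPU and, if CUDA is available, on the GPU.
import torch

a = torch.randn(2048, 2048)
b = torch.randn(2048, 2048)

c_cpu = a @ b                       # runs on the CPU

if torch.cuda.is_available():       # needs an Nvidia card plus the CUDA/cuDNN libraries
    c_gpu = a.cuda() @ b.cuda()     # the identical multiply, executed on the GPU
```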

Nvidia also has its own single board computer, the Jetson TX2, designed to be the brains for self-driving cars, selfie-snapping drones, and so on. However, as our [Brian Benchoff] has pointed out, the price point is a little high for the typical hacker.

Google has also been working on its own hardware acceleration in the form of its Tensor Processing Unit (TPU). You might have noticed the similarity to the name of Google's framework above, TensorFlow. TensorFlow makes heavy use of tensors (think of single and multi-dimensional arrays in software). According to Google's paper on the TPU, it's designed for the inference phase of neural networks. Inference refers not to training neural networks but to using the neural network after it's been trained. We haven't seen it used by any frameworks yet, but it's something to keep in mind.
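If the word "tensor" is unfamiliar, this tiny sketch (illustrative, not from the article) shows tensors of increasing rank in TensorFlow; they really are just arrays with different numbers of dimensions.

```python
# Tensors are just 0-, 1-, 2-, ... dimensional arrays.
import tensorflow as tf

scalar = tf.constant(3.0)                        # rank 0: a single number
vector = tf.constant([1.0, 2.0, 3.0])            # rank 1: a one-dimensional array
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # rank 2: a two-dimensional array
```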

Using Other People's Hardware

Do you have a neural network that'll take a long time to train, but don't have a supported GPU, or don't want to tie up your own resources? In that case there's hardware you can use on other machines accessible over the internet. One such is FloydHub, which, for an individual, costs only pennies per hour with no monthly fee. Another is Amazon EC2.

Datasets

Training a neural network with labeled data
We said that one of the breakthroughs in neural networks was the availability of training data containing large numbers of samples, in the tens of thousands. Training a neural network using a supervised training algorithm involves giving the data to the network at its inputs but also telling it what the expected output should be. That means the data also has to be labeled. If you give an image of a horse to the network's inputs and its outputs say it looks like a cheetah, then it needs to know that the error is large and that more training is needed. The expected output is called a label, and the data is 'labeled data'.

Many such datasets are available online for training purposes. MNIST is one such dataset for handwritten character recognition. ImageNet and CIFAR are two different datasets of labeled images. Many more are listed on this Wikipedia page. Many of the frameworks listed above have tutorials that include the necessary datasets.
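As a concrete example of how easy these datasets are to get hold of, here is a sketch of pulling MNIST through Keras, which downloads it the first time it's used. The shapes in the comments are MNIST's; everything else is just illustration.

```python
# Load the MNIST handwritten digit dataset: images plus their labels.
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
# x_train: 60,000 images of 28x28 pixels
# y_train: 60,000 labels, each the digit (0-9) the matching image shows
```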

That's not to say you absolutely need a large dataset to get respectable accuracy. The walking robot we mentioned previously, which used the FANN framework, used the servo motor positions as its training data.

Other Sources

Unlike in the 80s and 90s, you're no longer limited to hardcopy books about neural networks (though you can still buy them); there are now numerous ones online. Two online books I've enjoyed are Deep Learning from MIT Press and Neural Networks and Deep Learning. The frameworks listed above all have tutorials to help you get started. And then there are countless other websites and YouTube videos on any topic you search for. I find YouTube videos of recorded lectures and seminar talks very useful.

The Future

Raspberry Pi with GPU
Doubtless the future will see more frameworks coming along.

We've long seen specialized neural network chips and boards on the market, but none have ever found a big market, even back in the 90s. However, those weren't designed specifically to serve the real growth area: the neural network software that everyone's working on. GPUs do serve that market. As neural networks with millions of connections for image and voice processing, language, and so on make their way into smaller and smaller consumer devices, the need for more GPUs, or for processors tailored to that software, will hopefully result in something that can become a new component on a Raspberry Pi or Arduino board. There is the possibility, though, that the processing will remain an online service instead. EDIT: It turns out there is a GPU on the Raspberry Pi (see the comments below). That doesn't mean all the above frameworks will make use of it though; for example, TensorFlow supports only Nvidia CUDA cards. But you can still use the GPU for your own custom neural network code. Various links are in the comments for that too.

There is already competition for GPUs from ASICs like the TPU, and it's possible we'll see more of those, possibly ousting GPUs from neural networks altogether.

As for our new computer