Six major areas of artificial intelligence worth watching


Over the past 10 years, the AI field has made great progress. As the tech giants continue to use the media to spell out long-term strategies centered on AI, advances such as speech recognition and self-driving cars have already drawn wide acclaim. However, AI is often confused with machine learning. In fact, AI is a multidisciplinary field whose ultimate goal is to build machines that can perform tasks and cognitive functions. To achieve this goal, machines must be able to learn these capabilities on their own.

To familiarize beginners with AI, here are six notable AI areas: what they are, why they matter, how they are used today, and which companies are researching them.

1. Reinforcement learning

RL is a paradigm for learning by trial and error, inspired by the way humans learn new tasks. In a typical RL setup, an AI agent observes its current state in a digital environment, takes actions, and receives reward feedback from the environment, so that it knows whether each action helped or hindered its progress. The agent must therefore discover the best way to maximize its reward. This approach is famously used by Google's DeepMind. A real-world example of RL is the task of optimizing the energy efficiency of cooling Google's data centers, where an RL system achieved a 40% reduction in cooling costs. A major advantage of using RL in environments that can be simulated, such as video games, is that training data can be generated at very low cost. This stands in stark contrast to supervised deep learning tasks, which often require training data that is expensive and difficult to obtain from the real world.
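To make the state-action-reward loop concrete, here is a minimal tabular Q-learning sketch in Python. The toy corridor environment, reward values, and hyperparameters are illustrative assumptions, not any specific DeepMind system.

```python
import random

# A tiny 1-D corridor: states 0..4, with a reward of +1 at the goal state 4.
# This toy environment stands in for the "digital environment" above.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left, step right

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # Trial and error: explore randomly with probability epsilon,
        # otherwise exploit the best action known so far.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0  # incentive feedback
        # Q-learning update: nudge Q toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy steps right (+1) toward the goal.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```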

Scope of application: multiple agents learning or interacting in their own environments, or learning from one another within the same environment; learning to navigate 3D environments, such as mazes or city streets for autonomous driving; inverse reinforcement learning, which infers the goal of a task from observed behavior (for example, learning to drive).

Companies: Google DeepMind, Prowler.io, Osaro, MicroPSI, Maluuba/Microsoft, NVIDIA, Mobileye.

2. Generative models

In contrast to discriminative models, which are used for classification or regression tasks, generative models learn a probability distribution over the training examples. By sampling from this high-dimensional distribution, a generative model outputs new examples that resemble the training data. This means, for example, that a generative model trained on real images of faces can output new synthetic face-like images. For a closer look at how these models work, check out Ian Goodfellow's excellent NIPS 2016 tutorial. The architecture he introduced, generative adversarial networks (GANs), offers a path toward unsupervised learning. A GAN has two neural networks: a generator, which takes random noise as input and whose task is to synthesize content (for example, an image), and a discriminator, which has learned what real images look like and whose task is to tell whether a given image is real or fake. Adversarial training can be viewed as a game in which the generator must iteratively learn to produce images that the discriminator can no longer distinguish from real ones. This framework is being extended to many data modalities and tasks.
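As an illustration of the generator/discriminator game described above, here is a minimal GAN sketch in PyTorch. The 1-D toy data, network sizes, and training schedule are illustrative assumptions, not Goodfellow's original setup.

```python
import torch
import torch.nn as nn

# Toy "real data": samples from a 1-D Gaussian the generator must imitate.
def real_batch(n):
    return torch.randn(n, 1) * 1.5 + 4.0

# Generator: random noise in, synthetic sample out.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: sample in, probability "this is real" out.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    # Train the discriminator to separate real from generated samples.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()  # detach: don't update G here
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator (the adversarial "game").
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))  # label fakes as real
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The mean of generated samples should approach the real mean of 4.0.
print(G(torch.randn(1000, 8)).mean().item())
```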

Scope of application: simulating possible futures of a time series (e.g., for planning tasks in reinforcement learning); super-resolution of images; recovering 3D structure from 2D images; generalizing from small labeled datasets; tasks where one input can produce multiple correct outputs (such as predicting the next frame of a video); creating natural language for conversational interfaces; semi-supervised learning when not all data is labeled; artistic style transfer; synthesizing music and speech.

Companies: Twitter, Adobe, Apple, Prisma, Jukedeck, Creative.ai, Gluru, Mapillary, Unbabel.

3. Networks with memory

In order for AI systems to generalize to diverse real-world environments, they must be able to continually learn new tasks and remember how to perform all of them in the future. However, traditional neural networks usually cannot learn this way. This shortcoming is called catastrophic forgetting. It occurs because the weights in a network that matter for solving task A are changed when the network is subsequently trained to solve task B.

However, there are several powerful architectures that can give neural networks varying degrees of memory. These include long short-term memory networks (a variant of recurrent neural networks), which can process and predict time series, and DeepMind's differentiable neural computer, which combines a neural network with an external memory system so that it can learn from and navigate complex data structures on its own.
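To show how a memory-equipped network handles sequences, here is a minimal PyTorch LSTM that learns to predict the next value of a sine wave. The data, window size, and network dimensions are illustrative assumptions; this is far simpler than DeepMind's differentiable neural computer.

```python
import torch
import torch.nn as nn

# Training data: sliding windows of a sine wave; the target is the next value.
t = torch.linspace(0, 20, 500)
wave = torch.sin(t)
window = 20
X = torch.stack([wave[i:i + window] for i in range(len(wave) - window)]).unsqueeze(-1)
y = wave[window:].unsqueeze(-1)

class NextValue(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # The LSTM's cell state is the "memory" that persists across time steps.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # out: (batch, time, hidden)
        return self.head(out[:, -1])   # predict from the final time step

model = NextValue()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):
    loss = loss_fn(model(X), y)
    opt.zero_grad(); loss.backward(); opt.step()

print(loss.item())  # small loss: the network remembers the recent past
```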

Scope of application: learning agents that can generalize to new environments; robotic arm control tasks; autonomous driving; time series prediction (e.g., financial markets, video, the Internet of Things); natural language understanding and next-word prediction.

Companies: Google DeepMind, NNaisense, SwiftKey/Microsoft Research, Facebook AI Research.

4. Learning from less data and building smaller models

Deep learning models are notable for requiring large amounts of training data. Without large-scale training data, deep learning models will not converge to their optimal settings and will perform poorly on complex tasks such as speech recognition or machine translation. This data requirement only grows when a single neural network is used to solve a problem end to end, such as taking raw audio recordings of speech as input and outputting text transcriptions of the speech.

If we want AI to solve problems where training data is scarce, expensive, or time-consuming to acquire, it is important to develop models that can learn optimal solutions from fewer examples (i.e., one- or zero-shot learning). When training on small datasets, challenges include overfitting, difficulty handling outliers, and differences in the data distribution between training and testing. An alternative approach is transfer learning.
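As one concrete form of transfer learning, here is a hedged sketch of fine-tuning a pretrained image classifier on a small dataset with torchvision. The class count and dummy batch are placeholders for a real small dataset, and the pretrained-weights API shown is the one used in recent torchvision versions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning: start from a network pretrained on a large dataset
# (ImageNet) and adapt it to a small target task instead of training
# from scratch on scarce data. Downloading the weights requires internet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so its weights stay fixed.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer; NUM_CLASSES is a placeholder
# for a small target task, e.g. 5 categories with ~100 images each.
NUM_CLASSES = 5
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Only the new head is trained, which needs far fewer examples.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative step on a dummy batch (stand-in for a real DataLoader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print(loss.item())
```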

Scope of application: imitating the performance of deep networks by training shallow networks on their outputs, after the deep network has been trained on large-scale data; deep model architectures with fewer parameters but the same performance (such as SqueezeNet); machine translation.

Companies: Geometric Intelligence/Uber, DeepScale.ai, Microsoft Research, Google, Bloomsbury AI.

5. Hardware for training

The main catalyst for recent AI progress has been the repurposing of graphics processing units (GPUs) to train large neural network models. Unlike central processing units (CPUs), which compute sequentially, GPUs offer a massively parallel architecture that can handle many operations simultaneously. Given that neural networks must process large amounts of (usually high-dimensional) data, training on a GPU is much faster than on a CPU. This is why NVIDIA has been so hot in recent years.
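The parallelism argument is easy to see with a quick matrix-multiplication timing in PyTorch. This is a rough illustration that assumes a CUDA-capable GPU is available; matrix size and repeat count are arbitrary.

```python
import time
import torch

def time_matmul(device, n=4096, repeats=10):
    """Time an n-by-n matrix multiply, the core workload of neural nets."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    _ = a @ b  # warm-up: exclude one-time allocation costs
    if device == "cuda":
        torch.cuda.synchronize()  # GPU ops are async; wait before timing
    start = time.perf_counter()
    for _ in range(repeats):
        _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / repeats

print(f"CPU: {time_matmul('cpu'):.4f} s per matmul")
if torch.cuda.is_available():
    # Typically one to two orders of magnitude faster than the CPU run.
    print(f"GPU: {time_matmul('cuda'):.4f} s per matmul")
```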

However, GPUs were not designed specifically for training AI; they emerged to render video and game graphics. Their high computational precision is not always necessary, and they suffer from memory bandwidth and data throughput issues. This has opened up opportunities for startups to create chips designed specifically for high-dimensional machine learning applications. New chips promise larger memory bandwidth, higher compute density, and better efficiency and performance per watt. This would create a virtuous circle: faster and more efficient model training → a better user experience → rapid iteration of users and products → larger datasets → better model performance through optimization.

Applications: faster training of models (especially on graphics data); improved energy and data efficiency when making predictions; IoT devices running AI systems; infrastructure as a service (IaaS); autonomous vehicles, drones, and robots.

Companies: Graphcore, Cerebras, Isocline Engineering, Google (TPU), NVIDIA (DGX-1), Nervana Systems (Intel), Movidius (Intel), Scortex.

6. Simulation environment

As mentioned earlier, generating training data for AI systems is often challenging. Moreover, AI must generalize to many situations to be useful in the real world. Developing digital environments that simulate real-world physics and behavior therefore gives us good testbeds for training AI. These environments present raw pixels to the AI, which then takes actions to achieve the goals it has been set (or has learned). Training in these simulated environments can help us understand how AI systems learn and how to improve them, and may also provide models that can be transferred to real-world applications.
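The observe/act loop that a simulated environment exposes looks like the following sketch using the Gymnasium library (the maintained successor to OpenAI Gym). Exact return values of step differ between Gym and Gymnasium versions, so treat the details as an assumption.

```python
import gymnasium as gym

# A simulated environment: CartPole presents observations, accepts actions,
# and returns rewards, exactly the training loop described above.
env = gym.make("CartPole-v1")
observation, info = env.reset(seed=0)

total_reward = 0.0
for _ in range(200):
    action = env.action_space.sample()  # random policy; an RL agent goes here
    observation, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        observation, info = env.reset()

env.close()
print(total_reward)  # cheap, simulator-generated experience for training
```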

Applications: learning to drive; manufacturing; industrial design; game development; smart cities.
