Artificial intelligence (AI) is not a new subject; it has been a serious area of research since the birth of modern computers, and a full-fledged subject in computer science academia. But real progress in the field was modest before the year 2000, when the intelligence of computers was not yet impressive enough for commercial success. One of the early notable achievements was IBM's chess-playing computer defeating world champion chess players. The earlier efforts in artificial intelligence were more about developing extremely complex algorithms in one single program, somewhat closer to how a grown-up brain thinks.
But today AI is a top trending topic; in fact, it had already come to the main stage significantly by 2013. Aart de Geus, CEO and chairman of Synopsys, referred to the growing trend of AI at SNUG, the company's popular user event, held in India in 2013. Recently AI has been gaining significant commercial momentum and is forecast to become a US$70 billion industry by 2020.
Over time, computer researchers found that machines can also learn, from level-zero intelligence up to whatever level is possible, the way newborn toddlers get smarter and more intelligent as they grow. This is called machine learning, which basically means gaining artificial intelligence step by step, and which has also led to deep learning.
Pic: Aibo Robot from Sony with AI features
This kind of step-by-step implementation of complex computations based on machine learning follows the methodology of how we evolve in thinking, rather than copying it exactly. Search engines, for example, suggest spelling corrections by contextually inferring the user's intent; Google did it, and has progressed a lot. Other examples include classifying images, and grouping online shoppers into various segments the way Flipkart and Amazon do.
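To give a feel for how such grouping works, here is a minimal k-means clustering sketch in Python. The shopper data and the two features (orders per month, average spend) are invented purely for illustration; real e-commerce systems use far richer features and more sophisticated algorithms.

```python
# Minimal k-means sketch: grouping shoppers by two hypothetical
# features (orders per month, average spend per order).
import random

random.seed(0)  # deterministic demo

shoppers = [(1, 20), (2, 25), (1, 18),      # occasional buyers
            (12, 30), (15, 28), (11, 35)]   # frequent buyers

def kmeans(points, k=2, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared distance)
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # move each center to the mean of its cluster
        centers = [(sum(p[0] for p in c) / len(c),
                    sum(p[1] for p in c) / len(c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

groups = kmeans(shoppers)
print([len(g) for g in groups])
```

The algorithm is never told which shopper belongs to which group; the two segments emerge from the data itself, which is the essence of the "learning from examples" idea in the text.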
However, it is still not the machine that rewrites its own code; it is the software developer who continuously adds many more do-if-while loops and other code to the program.
Vision processing and speech analysis are two major applications of artificial intelligence, where the computer analyses a picture the way we do, or listens to speech and converts it into text. Amazon's Echo is a good example.
What is really driving AI is the availability of huge computing resources, both on client devices such as smartphones and notebook computers and on cloud computing platforms. Cloud computing leader IBM is ahead in offering developers an environment to build applications with machine learning capabilities for picture recognition, voice recognition, language translation, and similar media and big data analysis.
In machine learning, the basic task is detecting deviation from a set pattern, or identifying new patterns, which may be repetitive or non-repetitive. Gaming applications are a good example where such pattern changes are normal.
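The "deviation from a set pattern" idea can be sketched in a few lines: learn the statistics of previously seen values, then flag anything that falls far outside them. The sensor-style readings below are invented for illustration.

```python
# Minimal sketch of detecting deviation from a set pattern:
# flag readings that fall far outside the mean of past values.
from statistics import mean, stdev

history = [50, 52, 49, 51, 50, 48, 51, 50]   # the learned "pattern"
mu, sigma = mean(history), stdev(history)

def is_anomaly(x, threshold=3.0):
    # a reading more than `threshold` standard deviations from the
    # mean deviates from the established pattern
    return abs(x - mu) / sigma > threshold

print(is_anomaly(51))   # close to the pattern
print(is_anomaly(90))   # far outside the pattern
```

Real systems learn much richer patterns than a single mean and standard deviation, but the principle, compare new input against what has been seen before, is the same.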
Custom semiconductor chips or FPGA-based accelerators can do better than software-based machine learning. A multistage comparator circuit, a flash ADC, or any logic circuit wired to execute a single instruction is faster than an if-while loop on a typical traditional CPU. That is why Google has developed its own processing unit, the Tensor Processing Unit (TPU), dedicated to running machine learning algorithms.
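The flash-ADC comparison above can be illustrated in code: a CPU-style loop checks thresholds one at a time, while dedicated hardware (or a SIMD/vector unit) evaluates all comparators in one step. The thresholds are hypothetical; this only models the idea, not real circuit timing.

```python
# Flash-ADC idea: a loop does one compare per iteration, while
# parallel hardware fires all comparators at once (modeled here
# as a single expression over all thresholds).
thresholds = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]

def encode_loop(v):
    # CPU-style: sequential, one compare per loop iteration
    count = 0
    for t in thresholds:
        if v >= t:
            count += 1
    return count

def encode_parallel(v):
    # hardware-style: every comparator evaluated "simultaneously"
    return sum(v >= t for t in thresholds)

print(encode_loop(3.5), encode_parallel(3.5))
```

Both return the same code, but in hardware the parallel version finishes in one comparator delay instead of seven loop iterations, which is the advantage custom chips like the TPU exploit at much larger scale (e.g. whole matrix multiplies per cycle).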
Artificial intelligence is a mix of high-performance computing and machine learning. Some of the algorithms of a chess-playing computer can be seen in engineering design software, such as tools for complex multilayer PCB design or SoC semiconductor VLSI design, where IP blocks or components must be placed in such a way that they can be connected smartly. If the signal of a component or block has to be routed to a diagonally opposite component on a PCB or a complex SoC, the software finds the best and shortest path. Smart software can avoid long routes in the first place by placing the most connected components near each other, while also taking care of interference caused by various electrical and RF parameters, so that it ensures signal quality by placing source and destination components such that routing is shorter and interference is lower. In fact, most EDA companies' software is no less complex than IBM's chess-playing software.
If you define artificial intelligence as a computer having some smart decision-taking ability, one could say some of the latest chip design software has AI capabilities and is no less complex than Google's search engine software. With supercomputer-like server hardware available, there is a good rise of super-smart software with artificial intelligence capability for various applications.
If that is the situation in engineering design, there is a new wave of consumer applications that AI is going to influence; the most visible now are self-driving cars and smart robots. These are programmed using C and C++, not Lisp. That is why this article is titled "Artificial intelligence is not the old wine in new bottle".
When it comes to software programming, an AI-enabled system is not a one-time coded program: it should be able to reprogram itself. Lisp (list processing) was one of the earliest high-level programming languages developed exclusively for artificial intelligence; in Lisp, both data and programs are list structures, so manipulating lists means manipulating programs as well as data. The scripting language Python and C++ are also praised for their ability to support artificial intelligence programming.
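The "programs as data" idea attributed to Lisp above can be recreated in Python: a program is parsed into a tree (a data structure), the tree is modified, and the modified program is executed. This is purely illustrative, not how production AI systems self-modify.

```python
# Treating a program as data: parse source into a syntax tree,
# rewrite part of it, then run the rewritten program.
import ast

source = "result = 2 + 3"
tree = ast.parse(source)

# walk the tree and rewrite every addition into a multiplication
for node in ast.walk(tree):
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        node.op = ast.Mult()

namespace = {}
exec(compile(tree, "<rewritten>", "exec"), namespace)
print(namespace["result"])   # the rewritten program computes 2 * 3
```

In Lisp this kind of manipulation is even more natural, because source code literally is a list that the running program can build and evaluate.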
There is an interesting artificial intelligence company called Cogitai, founded by three AI research experts: Mark Ring, Peter Stone, and Satinder Singh Baveja. These researchers wanted to give machines the ability to learn from continuous interaction with the real world, so that they can take smart decisions. Cogitai has also formed a "Brain Trust" of leading academic AI researchers, who will be actively engaged in technology development.
Sony, whose business has a huge stake in video and audio computing gadgets, is looking at artificial intelligence opportunities strategically. It has joined hands with Cogitai by investing in the company. Both companies plan to collaborate on developing artificial intelligence technologies using deep reinforcement learning combined with prediction technology.
AI, though an old concept, was earlier more about complex programming, which did not yield good results; now it is more about machine learning, where machines are trained with examples rather than through explicit programming. A good example of deep reinforcement learning is the recent success of AlphaGo from Google DeepMind, which could do better than humans.
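A tiny tabular Q-learning example shows the flavour of reinforcement learning: the agent is never told the rule, it discovers the right behaviour from trial, error, and reward. The toy environment (walk right along a five-cell corridor to reach a goal) is invented for illustration; AlphaGo combines this idea with deep neural networks at vastly larger scale.

```python
# Minimal tabular Q-learning sketch on a 5-cell corridor:
# the agent learns, from reward alone, that stepping right
# is the correct policy in every state.
import random

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                       # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3

for episode in range(300):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0
        # standard Q-learning update rule
        Q[(s, a)] += alpha * (reward
                              + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# after training, the greedy policy should step right everywhere
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)
```

No line of this program says "go right"; that behaviour is trained in by examples of reward, which is exactly the contrast with explicit programming drawn above.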
Sony and Cogitai both consider the next challenge for AI to be the creation of systems that can autonomously and continually learn from experience: autonomous cognitive development systems (or continual learning systems) that exhibit flexible competence and can learn to react properly in a wide variety of task domains.
These systems will allow machines to autonomously build up their own knowledge and skills from interactive experience with the real world, and then to share and extend their knowledge, skills and understanding with each other.
Sony has hands-on experience in AI: it earlier launched AIBO, a fully autonomous robot featuring interesting AI technologies such as face recognition and speech recognition. Sony also established the Sony Intelligence Dynamics Laboratory in 2004, which studied the autonomous development of intelligence, called Intelligence Dynamics. Sony has used AI tools in its Xperia smartphones through an app called "AR Effect", which uses augmented reality technology for picture editing, and in the facial recognition login capability of the PlayStation 4. Sony also employed speech recognition in Project N, a neckband-style wearable device providing a hands-free interactive interface for accessing music and audio information, without the need for an earpiece.
The smartphone, or any such smart device, is now truly starting to become smart, more than at any time before (and not really a few years back, when the first model of the iPhone was launched). It is like kids learning things at their own pace. The iPhone's Siri app is one example of AI's entry into the smartphone. In terms of artificial vision intelligence, machines are hardly close to some pet animals; the Aibo robot hardly matches a real living dog's intelligence. But let us not look only at the present; look at the robot 10 years from now! It may well match the intelligence of a kid, or even more.
If you are a computer science student looking for an open source software library for machine intelligence, check out https://www.tensorflow.org/ by Google. The website says its flexible architecture helps in deploying computation to one or more CPUs or GPUs in your notebook, desktop, server, or mobile hardware with a single API.
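To give a taste of the kind of computation such a library automates, here is a tiny gradient-descent fit written in plain Python, with invented data. TensorFlow's contribution is doing the differentiation automatically, scaling it to millions of parameters, and running it across CPUs and GPUs.

```python
# Fitting y = w * x by gradient descent on a mean-squared-error
# loss: the core loop that libraries like TensorFlow automate.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]        # generated by the "true" w = 2

w, lr = 0.0, 0.01                # initial weight, learning rate
for step in range(500):
    # gradient of mean((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad               # step against the gradient

print(round(w, 3))               # converges near 2.0
```

Here the gradient formula is derived by hand; in TensorFlow you would only write the loss, and the framework computes gradients for every parameter in the model automatically.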
If you have not yet googled a link to a free online course on artificial intelligence, check out UC Berkeley's free course on edX at https://www.edx.org/course/artificial-intelligence-uc-berkeleyx-cs188-1x.
To give you some idea of the latest AI chips launched recently:
1. Lenovo will use the Myriad 2 Vision Processing Unit (VPU) and custom computer vision algorithms in its virtual reality products. Myriad 2 is a computer vision processor for head tracking, gesture recognition, and blending multiple video streams into interactive VR video, for use in devices such as compact handheld and head-worn devices.
The Myriad 2 has 12 programmable vector processor cores with a built-in image signal processor and hardware accelerators, saving the on-board CPU and GPU from such tasks while consuming only one watt of additional power. Lenovo will make VR products powered by the Movidius chip available in the second half of 2016.
2. A start-up named KnuEdge has launched a brain-like processor chip and software with high-quality voice recognition and authentication. The new AI processor, named the KnuPath Hermosa, differs from present processor architectures such as the Von Neumann style. The vector-processing-enabled KnuPath is a 256-core DSP-like artificial intelligence (AI) chip, programmable in C++ for machine learning and deep learning workloads. KnuPath uses a heterogeneous architecture and is based on biological principles.