Here’s this week’s newsletter:

“An important shift from a mobile-first world to an AI-first world,” declared Google CEO Sundar Pichai, summarizing the Google I/O 2017 keynote yesterday. His description of the changes underway at his company applies to nearly every business today.
Almost all of Google’s announcements touched on AI in one way or another, from a second generation of TPU chips to accelerate deep learning for applications such as cancer research and DNA sequencing, to a broad effort to get Google Home onto as many screens and devices as possible. (As I wrote last week, your living room is the next battleground for intelligent assistants.) The company also shared that its speech recognition technology is now better than 95 percent accurate.
It’s all about a transition from “searching and organizing the world’s information to AI and machine learning,” Pichai said.
Is such a change underway at your company, too? If so, drop me a line and tell me what’s afoot.
For AI coverage, send news tips to Khari Johnson and guest post submissions to John Brandon. Please be sure to visit our AI Channel.
Thanks for reading,
Blaise Zerega
Editor in Chief

Full article: https://venturebeat.com/2017/05/18/ai-weekly-google-shifts-from-mobile-first-to-ai-first-world/

AI Weekly: Google shifts from mobile-first to AI-first world

A total of 34 artificial intelligence startups were acquired in the first quarter of this year, more than twice the number acquired in the year-ago quarter, according to the research firm CB Insights.
Tech giants seeking to reinforce their leads in artificial intelligence or make up for lost ground have been the most aggressive buyers. Alphabet Inc’s (GOOGL.O) Google has acquired 11 AI startups since 2012, the most of any firm, followed by Apple Inc (AAPL.O), Facebook Inc (FB.O) and Intel Corp (INTC.O), in that order, according to CB Insights.
The companies declined to comment on their acquisition strategies. A spokesman for Apple did confirm the company’s recent purchase of Lattice Data, a startup that specializes in working with unstructured data.
The first quarter also saw one of the largest deals to date as Ford Motor Co (F.N) invested $1 billion in Argo AI, founded by former executives on self-driving teams at Google and Uber Technologies Inc [UBER.UL].
Startups are looking to go deep on applications of artificial intelligence to specific fields, such as health and retail, industry observers say, rather than compete directly with established companies.
“What you will see is very big players will build platform services, and startup communities will migrate more to applied intelligent apps,” said Matt McIlwain, managing director of Madrona Venture Group.
Healthcare startup Forward, for example, is using artificial intelligence to crunch data that can inform doctors’ recommendations.
“For people who really want to focus on core AI problems, it makes a lot of sense to be in bigger companies,” said Forward Chief Executive Officer Adrian Aoun, who previously worked at Google. “But for folks who really want to prove a new field, a new area, it makes more sense to be separate.”
Artificial intelligence companies that do remain independent field a steady stream of suitors: Matthew Zeiler, chief executive of Clarifai, which specializes in image and video recognition, said he has been approached about a dozen times by prospective acquirers since starting the company in late 2013.
Clarifai’s pitch to customers such as consumer goods company Unilever Plc (ULVR.L) and hotel search firm Trivago is bolstered by its narrow focus on artificial intelligence.
“(Google) literally competes with almost every company on the planet,” Zeiler said. “Are you going to trust them with being your partner for AI?”
Tech giants have been locked in a bidding war for academics specializing in artificial intelligence. Startups rarely have the capital to compete, but a company with a specialized mission can win over recruits, said Vic Gundotra, chief executive of AliveCor, which makes an AI-driven portable heart monitor.
“They say, ‘I want to come here and work on a project that might save my mother’s life,’” Gundotra said.

(Reporting by Julia Love; Editing by Jonathan Weber and Lisa Shumaker)

Full article: http://www.reuters.com/article/us-tech-startups-ai-idUSKBN18M2ED?utm_source=applenews


Apple is reportedly working on a chip called the Apple Neural Engine, which would be dedicated to carrying out artificial intelligence (AI) processing on its mobile devices. The addition of this type of capability would catalyse the use of AI on mobile devices. Although artificial intelligence is being used extensively already to power digital assistants like Siri and Google Assistant, these technologies rely on computer servers to process data sent to them rather than the processing happening on the mobile device itself.
Augmented reality and digital assistants are not the only applications of AI that will become important on mobile devices. Once the capability is made available to all mobile application developers, it will bring new types of capabilities to mobile devices. Health applications, for example, will be able to tell when body readings from sensors on the phone or associated wearable devices are abnormal and need acting on. But the uses are potentially limitless and will bring about a new phase in how we rely on applications and our mobile devices in everyday life. And they will work even when the device is not connected to the Internet.
Strictly speaking, a specific processor is not an essential requirement for using AI on a mobile phone. Chip-maker Qualcomm, for example, has provided a software-based approach called the Snapdragon Neural Processing Engine to allow developers using its chips to incorporate AI into their software. An example of this is a car that monitors the driver using a camera and warns when they are using their smartphone or driving erratically.
Specific AI hardware, however, greatly speeds up the process called “machine learning” and allows more sophisticated types of AI to be used. Google’s AI hardware, called the Tensor Processing Unit (TPU), is 15 to 30 times faster than the fastest computer processors (CPUs) and graphics processors (GPUs) that power computers today. These TPUs were what gave Google DeepMind’s AlphaGo program its ability to beat world champions at the Chinese game of Go, and they have vastly improved Google’s automated language translation software, Google Translate.
The inclusion of AI in mobile software is going to massively increase the potential usefulness of that software and, through that, how much we come to depend on the mobile phone. Our state of health, for example, is really about how we are doing relative to how we normally feel. Changes in behaviour can signal changes in mental health, from conditions like dementia and Parkinson’s to precursors of illnesses such as diabetes and respiratory and cardiovascular diseases. Our phones could monitor patterns of activity and even how we walk. This ability would rest on the software learning our normal patterns and, once it detects a change, deciding what to do about it.
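That “learn the normal pattern, then flag the change” logic is easy to sketch. Here is a minimal illustration, assuming made-up daily step counts and a plain z-score test; a real system would use richer sensor streams and a trained model rather than a fixed threshold.

```python
# Minimal sketch of "learn normal, flag change" health monitoring.
# The step counts and threshold are hypothetical, purely for illustration.
import numpy as np

baseline_steps = np.array([7900, 8200, 8050, 7700, 8400, 8100, 7950])  # a "normal" week
mu, sigma = baseline_steps.mean(), baseline_steps.std()

def check_day(steps, threshold=3.0):
    """Flag a day whose step count deviates strongly from the learned baseline."""
    z = abs(steps - mu) / sigma
    return "alert: unusual activity" if z > threshold else "normal"

print(check_day(8000))  # normal
print(check_day(2500))  # alert: unusual activity
```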
The phone would be part of a self-directed ecosystem of intelligent and autonomous machines, including cars. Not only will the driving of autonomous cars depend completely on AI to function, it is likely that people will eventually share the use of these cars when needed rather than owning one themselves. AI will again be essential for managing this sharing, distributing cars as efficiently as possible and directing which cars pick up which clients. To do this, the scheduling AI service will need to liaise with the AI software on everyone’s phones to determine when each person will be at a given location and where they need to get to.
AI on a mobile device will also increasingly be used to keep the device protected, checking whether applications and communications are secure or likely to be a threat. This technology is already being implemented in smart home appliances, but only as software; the addition of dedicated AI chips will allow it to be much faster and to do more. Researchers are also looking at analysing the way we move as a means of uniquely identifying the wearer of a device.
AI will essentially be able to fill in for skills, understanding and knowledge that not everyone possesses. And even for those who do, remembering to do something, even when it is in your own best interest, is sometimes hard.
There is, however, a counterargument to the benefits of ever more intelligent mobile devices: the fear that as we come to rely on devices to do things for us, we will lose the underlying skills ourselves, and that this will eventually erode a person’s overall cognitive ability, or at least their ability to operate without the AI. The success of an AI assistant also depends entirely on the user following its advice, and that is something people may not be very good at doing.
The Conversation
Full Article: https://www.australasianscience.com.au/article/science-and-technology/apple-working-ai-chip-mobile-devices-bringing-them-new-level-intellig
Disclosure
David Glance owns shares in Apple

Cisco (NASDAQ: CSCO) announced today it has completed the acquisition of MindMeld Inc., a privately held artificial intelligence (AI) company based in San Francisco. MindMeld has pioneered the development of a unique AI platform that enables customers to build intelligent and human-like conversational interfaces for any application or device. Through its proprietary machine learning (ML) technology, MindMeld delivers incredible accuracy to help users interact with voice and chat assistants in a more natural way.
As chat and voice quickly become the interfaces of choice, MindMeld’s AI technology will enable Cisco to deliver unique experiences throughout its portfolio, starting with collaboration. This acquisition will power new conversational interfaces for Cisco’s collaboration products, revolutionizing how users will interact with our technology, increasing ease of use, and enabling new cognitive capabilities.
The MindMeld team joins the Cloud Collaboration group under the leadership of Jens Meggers, senior vice president and general manager, as the Cognitive Collaboration team.
Cisco acquired MindMeld for $125 million in cash and assumed equity awards.
More at http://www.cisco.com/.
Forward-Looking Statements
This press release may be deemed to contain forward-looking statements, which are subject to the safe harbor provisions of the Private Securities Litigation Reform Act of 1995, including statements regarding the acquisition powering new conversational interfaces for Cisco’s collaboration products, revolutionizing how our users will interact with our technology, increasing ease of use, and enabling new cognitive capabilities, the expected benefits to Cisco and its customers from completing the acquisition, and plans regarding MindMeld personnel. Readers are cautioned that these forward-looking statements are only predictions and may differ materially from actual future events or results due to a variety of factors, including, among other things, the potential impact on the business of MindMeld due to the uncertainty about the acquisition, the retention of employees of MindMeld and the ability of Cisco to successfully integrate MindMeld and to achieve expected benefits, business and economic conditions and growth trends in the networking industry, customer markets and various geographic regions, global economic conditions and uncertainties in the geopolitical environment and other risk factors set forth in Cisco’s most recent reports on Form 10-K and Form 10-Q. Any forward-looking statements in this release are based on limited information currently available to Cisco, which is subject to change, and Cisco will not necessarily update the information.
Full article: https://telecomreseller.com/2017/05/26/cisco-completes-acquisition-of-artificial-intelligence-company-mindmeld/

On his blog, Steve “Guggs” Guggenheimer announced Microsoft’s new AI partnership with Preferred Networks to integrate Chainer technology with Azure. Guggs recounted his experience making the announcement yesterday at de:code 2017 in Tokyo, Japan, with a little help from his colleagues Alex Kipman and Joseph Sirosh.
Guggs said that one of his favorite parts of his job is talking with developers from all over the world and hearing their feedback. During the de:code 2017 keynote, he spoke about Microsoft’s AI strategy as well as other Microsoft developments in Japan.
One of Guggs’ key points in the keynote was the new partnership with Preferred Networks, the developer behind Chainer, a deep learning AI framework. Developers across different industries can use Chainer to apply deep learning to IoT data and make business processes easier and more efficient (a minimal code sketch follows the partnership points below).
Guggs noted three parts of the Microsoft and Preferred Networks partnership:

  • Technology – We will work with Preferred Networks to integrate their deep learning technology (Chainer, ChainerMN, etc.) with Microsoft Azure. ChainerMN (multi-node) will have friction-free deployment on hyper-scale GPU clusters to reduce training time for deep neural networks. In addition, we will help make Chainer work great on Windows and in SQL Server 2017 Machine Learning features.
  • Education – We will work together to create deep learning training from beginner to advanced, in order to grow the base of developers with expertise on deep learning technologies. The advanced classes will teach how to apply practical deep learning to actual business cases.
  • Collaboration – Working together, we will drive the practical use of deep learning in the market to make digital transformation happen in several industries. We will establish a partner ecosystem to do artificial intelligence consulting and deep learning implementations that solve business challenges by utilizing deep learning.
Through its partnership with Preferred Networks, Microsoft wants to help developers use Chainer to create better AI-powered applications that help manage and solve enterprise users’ biggest issues. The entire de:code 2017 event in Tokyo, Japan is available on Channel 9.
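For readers curious what Chainer looks like in practice, here is a minimal sketch using Chainer’s standard API: a small two-layer network trained for one step on toy data. The network shape and the random data are illustrative assumptions, not part of the announcement.

```python
# Minimal Chainer sketch: a two-layer classifier, one training step on toy data.
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import optimizers

class MLP(chainer.Chain):
    def __init__(self, n_hidden=100, n_out=10):
        super(MLP, self).__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, n_hidden)  # input size inferred on first call
            self.l2 = L.Linear(n_hidden, n_out)

    def __call__(self, x):
        return self.l2(F.relu(self.l1(x)))

model = MLP()
optimizer = optimizers.Adam()
optimizer.setup(model)

x = np.random.rand(32, 784).astype(np.float32)          # toy input batch
t = np.random.randint(0, 10, size=32).astype(np.int32)  # toy labels

loss = F.softmax_cross_entropy(model(x), t)  # forward pass and loss
model.cleargrads()
loss.backward()      # backpropagation
optimizer.update()   # one parameter update
print(float(loss.data))
```

ChainerMN extends this same training loop across multiple GPUs and nodes, which is the part the Azure deployment work targets.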

Full article: https://www.onmsft.com/news/microsoft-announces-ai-partnership-with-preferred-networks-to-integrate-deep-learning-chainer-technology-into-azure

Microsoft announces AI partnership with Preferred Networks to integrate deep learning Chainer technology into Azure

A future with highways full of self-driving cars or robot friends that can actually hold a decent conversation may not be far away. That’s because we’re living in the middle of an “artificial intelligence boom” — a time when machines are becoming more and more like the human brain.
That’s partly because of an emerging subcategory of AI called “deep learning.”
It’s a process that often tries to mimic the human brain’s neocortex, which helps humans with language processing, sensory perception and other functions.
Essentially, deep learning is when machines figure out how to recognize objects. It’s often used to help a self-driving car see a nearby pedestrian, or to let Facebook know that there’s a human face in a photo. And despite only catching on in recent years, researchers have already applied deep learning in ways that could change our world.
Here are some examples of what it can do now:

Understanding the Earth’s trees

NASA’s Earth Exchange used a deep learning algorithm on satellite images to figure out the amount of land covered by trees across the United States. That information could improve scientists’ understanding of American ecosystems, plus how to most accurately model climate change’s effects on our planet.
NASA essentially uses deep learning to fix a specific problem. As more and more data becomes available, it also becomes harder (and more time-consuming) for scientists to interpret it. Files used in the project could be several petabytes in size. To put that in perspective, one petabyte is equivalent to 1,000 terabytes, and a one-terabyte hard drive could hold about 500 hours of movies or 17,000 hours of music.
“We do large-scale science. We’re talking about big data,” said Sangram Ganguly, a senior research scientist at NASA Ames Research Center and the BAER Institute. “As the data is increasing, there’s a need to merge some [conventional] physics-based models with machine learning models.”

Translating the world

The Google Translate app’s ability to translate text from photos comes out of deep learning techniques. The app is able to recognize individual letters or characters in an image, recognize the word those letters make, then look up the translation.


This process is more complicated than it seems — often, words appear in the real world under messier conditions than simple fonts on a computer. They’re frequently “marred by reflections, dirt, smudges and all kinds of weirdness,” as Google put it on its research blog. The software therefore also had to learn “dirty” depictions of text, a process that requires a robust database of photos to reference.
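Google’s production pipeline isn’t public, but the recognize-then-look-up flow the article describes can be sketched with off-the-shelf parts. In the toy version below, pytesseract (a real open-source OCR library) stands in for the letter and word recognition stage, and a tiny dictionary stands in for the translation model; the image path and word list are hypothetical.

```python
# Toy sketch of the photo-translation flow: recognize text, then look it up.
# pytesseract and Pillow are real libraries; the lexicon is a stand-in for a
# translation model, and "street_sign.jpg" is a hypothetical image.
from PIL import Image
import pytesseract

TOY_LEXICON = {"salida": "exit", "peligro": "danger", "empuje": "push"}

def translate_sign(path, lexicon=TOY_LEXICON):
    text = pytesseract.image_to_string(Image.open(path), lang="spa")  # OCR stage
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    return " ".join(lexicon.get(w, "[?]") for w in words)             # look-up stage

print(translate_sign("street_sign.jpg"))
```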

Teaching robots how to understand human life

Sanja Fidler, an assistant professor at the University of Toronto, says that she’s working on “cognitive agents” that “perceive the world like humans do” and “communicate with other humans in the environment.” But instead of focusing on the humanoid hardware of a robot, she’s interested in building the robot’s perception of our world.
In the future, if robots and humans ever coexist, robots “need to learn about objects and simple actions,” Fidler said. “You want the robot to understand how people are actually feeling and what they could be thinking.”
Fidler applies deep learning techniques to “robots” by taking data from pop culture. She trains the software-based mind of that “robot” with famous texts like the Harry Potter book series, then introduces clips of movies and entire movie scripts based on those books. According to Fidler, an average movie has about 200,000 frames of information, while the average book has about 150,000 words. Though this kind of research is still in its early stages, combining that information helps robots learn new concepts and process the real world.
To prove her case, Fidler showed a movie clip that robots automatically matched to text from the Harry Potter series. By picking up on words like “candy shop” and “white linen sheets,” the robot recognized a scene where Harry comes to in a hospital and discovers jars of treats. In other words, the robot is able to understand the human world well enough to match closely tied visuals to words, thanks to the wonders of deep learning.
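Fidler’s models are far more sophisticated, but the core matching idea, scoring how similar a scene’s description is to passages of text, can be shown in a toy bag-of-words form. The passages and caption below are invented paraphrases, not her data.

```python
# Toy matcher: find the book passage closest to a scene caption using
# bag-of-words cosine similarity. Real systems embed video frames directly.
from collections import Counter
import math

def cosine(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

passages = [
    "harry woke in the hospital wing beside jars of candy on white linen sheets",
    "the train sped north past misty fields toward the castle",
]
scene_caption = "a boy wakes in a hospital bed with candy jars nearby on white linen sheets"
print(max(passages, key=lambda p: cosine(scene_caption, p)))  # picks the hospital passage
```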
More is likely to come.
Full article: https://mic.com/articles/178155/the-artificial-intelligence-boom-is-here-heres-how-it-could-change-the-world-around-us#.b9Qg1o83i

When you hear the term “artificial intelligence,” you may think of tech giants Amazon, Google, IBM, Microsoft, or Facebook. Industrial powerhouse General Electric is now aiming to be included on that short list. It may not have a chipper digital assistant like Cortana or Alexa. It won’t sort through selfies, but it will look through X-rays. It won’t recommend movies, but it will suggest how to care for a diesel locomotive. Today, GE announced a pair of acquisitions and new services that will bring machine learning AI to the kinds of products it’s known for, including planes, trains, X-ray machines, and power plants.
The effort started in 2015 when GE announced Predix Cloud—an online platform to network and collect data from sensors on industrial machinery such as gas turbines or windmills. At the time, GE touted the benefits of using machine learning to find patterns in sensor data that could lead to energy savings or preventative maintenance before a breakdown. Predix Cloud opened up to customers in February, but GE is still building up the AI capabilities to fulfill the promise. “We were using machine learning, but I would call it in a custom way,” says Bill Ruh, GE’s chief digital officer and CEO of its GE Digital business (GE calls its division heads CEOs). “And we hadn’t gotten to a general-purpose framework in machine learning.”
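Predix’s internals aren’t public; as a rough sketch of the pattern-finding idea GE describes, an off-the-shelf anomaly detector can be fit to normal sensor readings and then flag suspicious ones. The scikit-learn model and the simulated temperature and vibration data below are assumptions for illustration, not GE’s method.

```python
# Rough sketch of anomaly detection on industrial sensor data.
# The readings are simulated; this is not GE's actual pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 500 healthy readings: ~60 C temperature, ~0.5 g vibration
normal = rng.normal(loc=[60.0, 0.5], scale=[2.0, 0.05], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

readings = np.array([[61.0, 0.52],    # looks healthy
                     [85.0, 1.40]])   # hot and shaking: likely pre-failure
print(detector.predict(readings))     # 1 = normal, -1 = anomaly
```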
Today GE revealed the purchase of two AI companies that Ruh says will get them there. Bit Stew Systems, founded in 2005, was already doing much of what Predix Cloud promises—collecting and analyzing sensor data from power utilities, oil and gas companies, aviation, and factories. (GE Ventures has funded the company.) Customers include BC Hydro, Pacific Gas & Electric, and Scottish & Southern Energy.
The second acquisition, Wise.io, is a less obvious fit. Founded by astrophysics and AI experts using machine learning to study the heavens, the company reapplied the tech to streamlining a company’s customer support systems, picking up clients like Pinterest, Twilio, and TaskRabbit. GE believes the technology will transfer yet again, to managing industrial machines. “I think by the middle of next year we will have a full machine learning stack,” says Ruh.

GE is selling a new service that promises to predict when a machine will break down, so technicians can preemptively fix it.
GE also just bought ServiceMax, maker of software for dispatching field service agents, for a cool $915 million. “It’s a unicorn, essentially,” says Ruh.
Though young, Predix is growing fast, with 270 partner companies using the platform, according to GE, which expects revenue on software and services to grow over 25% this year, to more than $7 billion. Ruh calls Predix a “significant part” of that extra money. And he’s ready to brag, taking a jab at IBM Watson for being a “general-purpose” machine-learning provider without the deep knowledge of the industries it serves. “We have domain algorithms, on machine learning, that’ll know what a power plant is and all the depth of that, that a general-purpose machine learning will never really understand,” he says.
GE Healthcare lobbed a shot across IBM’s bow in announcing a deal with UCSF to develop AI-driven apps for the care of various diseases. (See my colleague Chrissy Farr’s full coverage here.) Health is the first industry IBM Watson targeted (starting with years of studying cancer data), and UCSF is a nice prize.
Not synonymous with health is another new GE customer, McDonald’s, which is using a Predix service called Current to lower energy costs on items like heating and lighting. Most of GE’s new offerings are in the energy realm, such as Digital Substation, which promises to use monitoring and AI to reduce downtime for power companies.
One especially dull-sounding new Predix service—Predictive Corrosion Management—touches on a very hot political issue: giant oil and gas pipeline projects. Over 400 people have been arrested in months of protests against the Dakota Access Pipeline, which would carry crude oil from North Dakota to Illinois. The issue is very complicated, but one concern of protestors is that a pipeline rupture would contaminate drinking water for the Standing Rock Sioux reservation.
“I think absolutely this is aimed at that problem. If you look at why pipelines spill, it’s corrosion,” says Ruh. “We believe that 10 years from now, we can detect a leak before it occurs and fix it before you see it happen.” Given how political battles over pipelines drag on, 10 years might not be so long to wait.

By Sean Captain
Full article: https://www.fastcompany.com/3065692/ge-wants-to-be-the-next-artificial-intelligence-powerhouse

REDMOND, Washington — Microsoft Corp. announced it has formed the Microsoft AI and Research Group, bringing together Microsoft’s world-class research organization with more than 5,000 computer scientists and engineers focused on the company’s AI product efforts. The new group will be led by computer vision luminary Harry Shum, a 20-year Microsoft veteran whose career has spanned leadership roles across Microsoft Research and Bing engineering.

Microsoft is dedicated to democratizing AI for every person and organization, making it more accessible and valuable to everyone and ultimately enabling new ways to solve some of society’s toughest challenges. Today’s announcement builds on the company’s deep focus on AI and will accelerate the delivery of new capabilities to customers across agents, apps, services and infrastructure.
In addition to Shum’s existing leadership team, several of the company’s engineering leaders and teams will join the newly formed group, including the Information Platform, Cortana and Bing, and Ambient Computing and Robotics teams led by David Ku, Derrick Connell and Vijay Mital, respectively. All combined, the Microsoft AI and Research Group will encompass AI product engineering, basic and applied research labs, and New Experiences and Technologies (NExT).
“We live in a time when digital technology is transforming our lives, businesses and the world, but also generating an exponential growth in data and information,” said Satya Nadella, CEO, Microsoft. “At Microsoft, we are focused on empowering both people and organizations, by democratizing access to intelligence to help solve our most pressing challenges. To do this, we are infusing AI into everything we deliver across our computing platforms and experiences.”
“Microsoft has been working in artificial intelligence since the beginning of Microsoft Research, and yet we’ve only begun to scratch the surface of what’s possible,” said Shum, executive vice president of the Microsoft AI and Research Group. “Today’s move signifies Microsoft’s commitment to deploying intelligent technology and democratizing AI in a way that changes our lives and the world around us for the better. We will significantly expand our efforts to empower people and organizations to achieve more with our tools, our software and services, and our powerful, global-scale cloud computing capabilities.”
Microsoft is taking a four-pronged approach to its initiative to democratize AI:

  • Agents. Harness AI to fundamentally change human and computer interaction through agents such as Microsoft’s digital personal assistant Cortana
  • Applications. Infuse every application, from the photo app on people’s phones to Skype and Office 365, with intelligence
  • Services. Make these same intelligent capabilities that are infused in Microsoft’s apps — cognitive capabilities such as vision and speech, and machine analytics — available to every application developer in the world
  • Infrastructure. Build the world’s most powerful AI supercomputer with Azure and make it available to anyone, to enable people and organizations to harness its power

More information about this approach can be found here.
For 25 years, Microsoft Research has contributed to advancing the state of the art of computing through its groundbreaking basic and applied research that has been shared openly with the industry and academic communities, and with product groups within Microsoft. The organization has contributed innovative technologies to nearly every product and service Microsoft has produced in this timeframe, from Office and Xbox to HoloLens and Windows. More recently, Shum has expanded the organization’s mission to include the incubation of disruptive technologies and new businesses.
“My job has been to take Microsoft Research, an amazing asset for the company, and make it even more of a value-creation engine for Microsoft and our industry,” Shum said. “Today’s move to bring research and engineering even closer will accelerate our ability to deliver more personal and intelligent computing experiences to people and organizations worldwide.”
The Microsoft AI and Research Group is hiring for positions in its labs and offices worldwide. More information can be found at https://www.microsoft.com/en-us/research/careers/.
Microsoft (Nasdaq “MSFT” @microsoft) is the leading platform and productivity company for the mobile-first, cloud-first world, and its mission is to empower every person and every organization on the planet to achieve more.
Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.
Read more at https://news.microsoft.com/2016/09/29/microsoft-expands-artificial-intelligence-ai-efforts-with-creation-of-new-microsoft-ai-and-research-group/#Jgl2ELuBVBqlIrCK.99

The Mirai is Toyota’s car of the future. It runs on hydrogen fuel cells, gets 312 miles on a full tank and only emits water vapor. So, to target tech and science enthusiasts, the brand is running thousands of ads with messaging crafted based on their interests.

The catch? The campaign was written by IBM’s supercomputer, Watson. After spending two to three months training the AI to piece together coherent sentences and phrases, Saatchi LA began rolling out a campaign last week on Facebook called “Thousands of Ways to Say Yes” that pitches the car through short video clips.
“When we started to look at the people who were going to be interested in this car, we realized it was people who were new technology adopters — it was psychologists; it was engineers,” said Chris Pierantozzi, ECD at Saatchi LA. “So we wanted to make an ad for almost every single potential buyer of this car — one for every type of Mirai driver.”
First, Saatchi LA wrote 50 scripts based on location, behavioral insights and occupation data that explained the car’s features to set up a structure for the campaign. The scripts were then used to train Watson so it could whip up thousands of pieces of copy that sounded like they were written by humans.
“We realized that we couldn’t just let it go out and try to figure out the creative on its own,” Pierantozzi said. “We had to give it guidelines with exactly what we wanted, so it then had a little bit of creative freedom to come up with some of the thoughts on its own.”
The AI then scraped the internet, including sites like YouTube and Wikipedia, to create a neural network of words and phrases about what it means to be a scientist and put the thoughts together into sentences.
Getting the AI to string together a proper sentence was a challenge at first, Pierantozzi said. The first piece of copy it came back with was the incoherent sentence, “Yes, probably in this out we saw can I realized in college to cook applied into the.” Another early ad read, “Yes, the worried scientist mother there.”
“AI was a core expectation of the project from the get-go, but the undertaking was massive given that we needed a ton of lines and 42 brains to work together to create the specific copy to feed into the wild,” explained Chris Neff, executive producer of Tool of North America, the production company behind the work. “AI production is time consuming and requires an open mind. You may not get the results that you think you will get and that expectation has to be set from day one.”
After a few more attempts, “We realized that it was struggling with the words that it had learned to create cohesive sentences,” Saatchi LA’s Pierantozzi said. “[We] allowed it to take all of the words within the neural net and apply them to things like nouns, adjectives, pronouns, phrases.”
About halfway through the process, Watson began putting together sentences, but they weren’t connected to each other. So the last group of training sessions focused on word meaning and using words together cohesively. During this phase, Saatchi LA began to rank the AI’s responses to train the technology on whether something was “good” or “bad.”
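The article doesn’t detail how those rankings fed back into Watson; one simple pattern they evoke is training a small text classifier on the human-labeled lines and using it to score fresh candidates. Here is a sketch under that assumption, reusing example lines quoted elsewhere in the piece; the labels are invented.

```python
# Toy "rank the outputs, keep the good ones" loop: train a classifier on
# human-labeled lines, then score new candidates. Labels are invented;
# Watson's actual feedback mechanism isn't public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

lines = ["Yes, it's for fans of possibility.",
         "Yes, the future is available now.",
         "Yes, probably in this out we saw can.",
         "Yes, the worried scientist mother there."]
labels = [1, 1, 0, 0]  # 1 = "good", 0 = "bad", as a human ranker judged them

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(lines, labels)

for candidate in ["Yes, it's Mother Nature approved.", "Yes, the into out we there."]:
    print(candidate, clf.predict_proba([candidate])[0, 1])  # probability of "good"
```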
During the ninth training session—about two and a half months into the project—Watson wrote a piece that read, “Yes, it’s for fans of possibility.” Then it spat out an ad referencing Carbon-14, a radioactive isotope used in radiocarbon dating, which fit the campaign’s target audience of scientists. Something had finally clicked. “It was finding these very deep, weird insights that it mined out and it started to put it into sentence structures—things that maybe we necessarily in the advertising industry wouldn’t be able to find on our own,” Pierantozzi explained.
The final version of creative included lines like, “Yes, it’s Mother Nature approved,” “Yes, this car is an ode to tech” and “Yes, the future is available now.” Other phrases in the campaign include, “Yes, it’s a better future for insects,” “Yes, it will turn heads on the moon” and “Yes, it drives like a planet.”
Creative was ultimately vetted by Saatchi LA’s copywriters before it was pushed out.
“At the very end, we would still go through all of the lines and make sure that what was being put out live, we were evaluating the content that the AI was writing for,” Pierantozzi said.
By Lauren Johnson
Full article: http://www.adweek.com/digital/saatchi-la-trained-ibm-watson-to-write-thousands-of-ads-for-toyota/

Saatchi LA Trained IBM Watson to Write Thousands of Ads for Toyota

A new tool using International Business Machines Corp.’s Watson, famous for defeating its human competitors on “Jeopardy” in 2011, is hard at work for in-house legal departments with the goal of significantly reducing outside counsel spend.

So far, use of “Outside Counsel Insights,” or OCI, has been limited to legal departments in the financial services industry, according to Brian Kuhn, co-founder and leader of IBM Watson Legal. But with the potential to save as much as 30 percent on annual outside counsel spend, it’s no surprise that the tool has piqued the interest of some of the largest companies in that field.
The service relies on cloud-based cognitive computing system Watson to reveal billing insights to in-house legal departments, Kuhn said. The development of OCI, which became an official offering late last quarter, stemmed from the perceived desire of legal departments to get their arms around this line item on the budget, he added.
“Outside counsel spend is really a significant concern for corporate general counsel,” Kuhn said, noting that “on average, corporate law departments spend one-third to 50 percent of their annual budget on outside counsel.”
To cut down on these costs, OCI looks at the amount of time a lawyer spends on a task and at line item descriptions in a budget, for instance, and creates a nearly complete automation of the invoice review process, Kuhn said. The tool also shows how outside counsel are working, he added, which “tells you not just what lawyers did, but in a very granular way, the order in which they did things.”
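IBM hasn’t published how OCI parses entries, but the flavor of automated invoice review can be sketched with a crude rule-based pass. The billing guidelines and time entries below are invented for illustration; Watson’s language understanding is of course far richer than keyword matching.

```python
# Crude sketch of automated invoice review: flag time entries that violate
# simple billing guidelines. Rules and entries are invented for illustration.
import re

GUIDELINES = [
    (re.compile(r"\b(intra[- ]?office|internal) (conference|meeting)\b", re.I),
     "internal conferences are not billable"),
    (re.compile(r"\bblock[- ]?billed\b|\bvarious tasks\b", re.I),
     "block billing is not permitted"),
]
MAX_HOURS_PER_ENTRY = 8.0

def review(entry, hours):
    flags = [msg for pattern, msg in GUIDELINES if pattern.search(entry)]
    if hours > MAX_HOURS_PER_ENTRY:
        flags.append(f"{hours}h exceeds the per-entry cap of {MAX_HOURS_PER_ENTRY}h")
    return flags or ["ok"]

print(review("Intra-office conference re: discovery strategy", 1.5))  # flagged
print(review("Draft motion to dismiss; research case law", 3.0))      # ok
```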

Together, these two features will help facilitate fixed-fee pricing, according to Kuhn, because legal departments will have a very detailed understanding of the work being done by outside counsel and can then dictate price. What’s more, Kuhn said, a future capability of OCI is to extract insights, such as how a judge ruled on certain motions and how specific lawyers perform on cross-examination, in order to “enforce appropriate legal strategy based on the outside counsel who’ve worked for you.”

“There are other tools out there on the marketplace that offer just a pure analytics approach and they can only parse structured data,” Kuhn said, explaining what makes Outside Counsel Insights unique. “What Watson’s good at is actually reading like a person, reading language … and the ability to take narrative descriptions of legal tasks and time entries and understand what a lawyer actually intends by that in the context of a company’s billing guidelines, is really how we move the needle and how we use AI.”

While OCI is currently only used in legal departments in the financial services industry, the potential savings are far from insignificant, Kuhn said. He pointed out that in just the one industry, IBM’s analysis shows that the service can provide a 22 to 30 percent savings on annual outside counsel spend after two years. At one company with over $1 billion in annual outside counsel spend, which IBM declined to identify because the financial services company does not give its name as a reference, Kuhn said the benefits case showed close to $400 million a year in savings after two years.

There are also plans to expand use of the tool in the future to other industries that rely heavily on outside counsel, according to Kuhn.

“This is definitely not a sexy use of Watson,” he noted. “It’s about creating efficiency for the lawyers and it’s about massively reducing outside counsel spend.”