LexisNexis Announces First Five Legal Tech Accelerator Participants

LexisNexis today announced the first five participants in its new Silicon Valley legal tech accelerator program, which was created to give startups a leg up in the rapidly expanding legal tech industry. In line with LexisNexis’ broader vision to transform the way law is practiced, each of the accelerator participants is innovating in a distinct area of the law. After a thorough evaluation process, the five finalists – Visabot, TagDox, Separate.us, Ping, and JuriLytics – were selected from a list of 40+ promising startups for the promise of their businesses and their innovative use of technology.
Based in the Menlo Park, CA offices of Lex Machina™, the program will leverage the vast content resources, deep expertise in legal, technology, and startup domains, and industry-leading market positions of LexisNexis and Lex Machina to guide and mentor program participants. The program will be led by Lex Machina CEO Josh Becker with support from LexisNexis’ Chief Technology Officer, Jeff Reihl, Chief Product Officer, Jamie Buckley, Vice President of US Product Management, Jeff Pfeifer, and Lex Machina Chief Evangelist, Owen Byrd.
“We’re very excited to kick off our tech accelerator with five incredible and promising startups – Visabot, TagDox, Separate.us, Ping, and JuriLytics – and look forward to providing them with the practical guidance and industry expertise they need to advance their businesses,” said Jeff Pfeifer. “The goal of our tech accelerator program is to identify some of the best and brightest legal tech startups, contribute to their early success, and then watch as their innovative technologies and vision transform the business and practice of law.”
The five charter members of the LexisNexis legal tech accelerator program are:

  • Visabot: An “immigration robot” powered by artificial intelligence that helps customers complete U.S. visa applications, including locating relevant open data about an applicant, guiding applicants in the process of gathering supporting documents, ensuring forms are filled out accurately, and drafting appropriate language to tell the applicant’s story.
  • TagDox: A legal document analysis tool that creates tags, allowing users to identify and structure information in a variety of document types, improving both the speed and the quality of the document review process; “tag results” can transform documents into easily readable summaries, checklists, database feeds or approval overviews.
  • Separate.us: A web-based application that automates legal document preparation for divorces and provides access to relevant professionals at affordable fixed rates, deploying a business model that targets both B2B and B2C customers.
  • Ping: An automated timekeeping application that collects all of a lawyer’s billable hours, capturing missed time and money (an estimated 20% across the industry), and operating entirely in the background in concert with standard legal billing software.
  • JuriLytics: An expert witness peer review service that attorneys can use to challenge their opponent’s experts with previously unobtainable credibility and bullet-proof their own expert’s work through vetting from the world’s top researchers (in any field of expertise).
Throughout the rigorous, 12-week curriculum, tech accelerator participants will gain knowledge and expertise in a variety of topics including technology and product development; running an agile product development organization; building a strong company culture; selling to legal departments and law firms; leveraging legal data; and best practices in customer success, marketing and fundraising. In addition, they will have access to a vast collection of enriched legal data and cutting-edge tools and technologies from LexisNexis, and will be able to leverage the company’s established relationships with Stanford University and other leading Bay Area schools, businesses, VCs and influencers to grow their companies.
    “The LexisNexis legal tech accelerator is a promising initiative,” said Miriam Rivera, Managing Partner at Ulu Ventures and an advisor at the Venture Capital Director’s College, a part of The Rock Center for Corporate Governance at Stanford University. “As a legal tech investor and former Deputy GC of Google responsible for expanding the use of legal technology throughout the department, I am convinced the LexisNexis tech accelerator will not only foster innovation but also encourage new companies to thrive with sound business practices.”
    For more information, or to apply to the tech accelerator program, please email Alex Oh ([email protected]).
    About LexisNexis® Legal & Professional
    LexisNexis Legal & Professional is a leading global provider of content and technology solutions that enable professionals in legal, corporate, tax, government, academic and non-profit organizations to make informed decisions and achieve better business outcomes. As a digital pioneer, the company was the first to bring legal and business information online with its Lexis® and Nexis® services. Today, LexisNexis Legal & Professional harnesses leading-edge technology and world-class content to help professionals work in faster, easier and more effective ways. Through close collaboration with its customers, the company ensures organizations can leverage its solutions to reduce risk, improve productivity, increase profitability and grow their business. LexisNexis Legal & Professional, which serves customers in more than 175 countries with 10,000 employees worldwide, is part of RELX Group plc, a world-leading global provider of information and analytics solutions for professional and business customers across industries.
    About Lex Machina
    Lex Machina’s award-winning Legal Analytics® platform is a new category of legal technology that fundamentally changes how companies and law firms compete in the business and practice of law. Delivered as Software-as-a-Service, Lex Machina provides strategic insights on judges, lawyers, parties, and more, mined from millions of pages of legal information. This allows law firms and companies to predict the behaviors and outcomes that different legal strategies will produce, enabling them to win cases and close business.
    Lex Machina was named “Best Legal Analytics” by readers of The Recorder in 2014, 2015 and 2016, and received the “Best New Product of the Year” award in 2015 from the American Association of Law Libraries.
    Based in Silicon Valley, Lex Machina is part of LexisNexis, a leading information provider and a pioneer in delivering trusted legal content and insights through innovative research and productivity solutions, supporting the needs of legal professionals at every step of their workflow. By harnessing the power of Big Data, LexisNexis provides legal professionals with essential information and insights derived from an unmatched collection of legal and news content—fueling productivity, confidence, and better outcomes. For more information, please visit www.lexmachina.com.
    Contact Information:
    www.lexmachina.com

    HSBC partners with Artificial Intelligence startup to combat money laundering

    HSBC Holdings Plc has partnered with Silicon Valley-based artificial intelligence startup Ayasdi Inc to automate some of its compliance processes in a bid to become more efficient.
    The banking group is implementing the company’s AI technology to automate anti-money laundering investigations that have traditionally been conducted by thousands of humans, the bank’s Chief Operating Officer Andy Maguire said in an interview last week.
    The vast majority of anti-money laundering investigations at banks do not find suspicious activity, resulting in a waste of resources, according to the startup.
    In a pilot of Ayasdi’s technology, HSBC saw the number of investigations drop by 20 percent without reducing the number of cases referred for more scrutiny, according to the startup.
    “It’s a win-win,” Maguire said. “We reduce risks and it costs less money.”
    Banks have been ramping up their use of AI and automation over the past year to save money and time on cumbersome and manual processes ranging from compliance checks to customer service.
    At the same time they have been working more closely with young financial technology companies and reducing the amount of technology that gets built in-house.
    One of the financial technology areas that has seen significant collaboration has been so-called “regtech”, or technology that can help financial institutions stay compliant with regulations and avoid hefty fines in areas such as money laundering or market manipulation.
    In 2012 HSBC agreed to pay $1.92 billion in fines to U.S. authorities for allowing itself to be used to launder drug money out of Mexico and for other compliance lapses.
    To cope with increased regulatory scrutiny and a swathe of new rules, banks went on a compliance hiring spree in the years following the financial crisis.
    Anti-money laundering checks “is a thing that the whole industry has thrown a lot of bodies at it because that was the way it was being done,” Maguire said.
    Banks have recently started cutting back on compliance hiring as they start deploying new technology that can help automate some of the tasks.
    Maguire said AI technology can help with compliance because it has the ability “to do things human beings are not typically good at like high frequency high volume data problems” or augment human capabilities.
    Ayasdi’s Executive Chairman and co-founder Gurjeet Singh was in January appointed to HSBC’s new technology advisory board, which provides advice and guidance to the bank on digital strategy.
    Full article: http://www.reuters.com/article/us-hsbc-ai-idUSKBN18S4M5?utm_source=applenews
    (Reporting by Anna Irrera; Editing by Lisa Shumaker)
    Contact Information:
    http://www.reuters.com/article/us-hsbc-ai-idUSKBN18S4M5?utm_source=applenews

    AI Weekly: Google shifts from mobile-first to AI-first world

    Here’s this week’s newsletter:

    “An important shift from a mobile first world to an AI first world,” declared Google CEO Sundar Pichai, summarizing the Google I/O 2017 keynote yesterday. His description of the changes underway at his company applies to nearly every business today.
    Almost all of Google’s announcements touched on AI in one way or another, from the introduction of a second generation of TPU chips to accelerate deep learning for such applications as cancer research and DNA sequencing, to a broad effort to get Google Home on as many screens and devices as possible. (As I wrote last week, your living room is the next battleground for intelligent assistants.) The company also shared that its speech recognition technology was now better than 95 percent accurate.
    It’s all about a transition from “searching and organizing the world’s information to AI and machine learning,” Pichai said.
    Is such a change underway at your company, too? If so, drop me a line and tell me what’s afoot.
    For AI coverage, send news tips to Khari Johnson and guest post submissions to John Brandon. Please be sure to visit our AI Channel.
    Thanks for reading,
    Blaise Zerega
    Editor in Chief

    Full article: https://venturebeat.com/2017/05/18/ai-weekly-google-shifts-from-mobile-first-to-ai-first-world/
    Contact Information:


    Acquisitions accelerate as tech giants seek to build Artificial Intelligence smarts

    A total of 34 artificial intelligence startups were acquired in the first quarter of this year, more than twice the number acquired in the year-ago quarter, according to the research firm CB Insights.
    Tech giants seeking to reinforce their leads in artificial intelligence or make up for lost ground have been the most aggressive buyers. Alphabet Inc’s (GOOGL.O) Google has acquired 11 AI startups since 2012, the most of any firm, followed by Apple Inc (AAPL.O), Facebook Inc (FB.O) and Intel Corp (INTC.O), respectively, according to CB Insights.
    The companies declined to comment on their acquisition strategies. A spokesman for Apple did confirm the company’s recent purchase of Lattice Data, a startup that specializes in working with unstructured data.
    The first quarter also saw one of the largest deals to date as Ford Motor Co (F.N) invested $1 billion in Argo AI, founded by former executives on self-driving teams at Google and Uber Technologies Inc [UBER.UL].
    Startups are looking to go deep on applications of artificial intelligence to specific fields, such as health and retail, industry observers say, rather than compete directly with established companies.
    “What you will see is very big players will build platform services, and startup communities will migrate more to applied intelligent apps,” said Matt McIlwain, managing director of Madrona Venture Group.
    Healthcare startup Forward, for example, is using artificial intelligence to crunch data that can inform doctors’ recommendations.
    “For people who really want to focus on core AI problems, it makes a lot of sense to be in bigger companies,” said Forward Chief Executive Officer Adrian Aoun, who previously worked at Google. “But for folks who really want to prove a new field, a new area, it makes more sense to be separate.”
    Artificial intelligence companies that do remain independent field a steady stream of suitors: Matthew Zeiler, chief executive of Clarifai, which specializes in image and video recognition, said he has been approached about a dozen times by prospective acquirers since starting the company in late 2013.
    Clarifai’s pitch to customers such as consumer goods company Unilever Plc (ULVR.L) and hotel search firm Trivago is bolstered by its narrow focus on artificial intelligence.
    “(Google) literally competes with almost every company on the planet,” Zeiler said. “Are you going to trust them with being your partner for AI?”
    Tech giants have been locked in a bidding war for academics specializing in artificial intelligence. Startups rarely have the capital to compete, but a company with a specialized mission can win over recruits, said Vic Gundotra, chief executive of AliveCor, which makes an AI-driven portable heart monitor.
    “They say, ‘I want to come here and work on a project that might save my mother’s life,’” Gundotra said.

    (Reporting by Julia Love; Editing by Jonathan Weber and Lisa Shumaker)

    http://www.reuters.com/article/us-tech-startups-ai-idUSKBN18M2ED?utm_source=applenews

    Contact Information:
    http://www.reuters.com/article/us-tech-startups-ai-idUSKBN18M2ED?utm_source=applenews

    Apple working on AI chip for mobile devices bringing them a new level of intelligence

    Apple is reportedly working on a chip called the Apple Neural Engine, which would be dedicated to carrying out artificial intelligence (AI) processing on its mobile devices. The addition of this type of capability would catalyse the use of AI on mobile devices. Although artificial intelligence is being used extensively already to power digital assistants like Siri and Google Assistant, these technologies rely on computer servers to process data sent to them rather than the processing happening on the mobile device itself.
    Augmented reality and digital assistants are not the only applications of AI that will become important on mobile devices. Once the capability is made available to all mobile application developers, it will bring new types of functionality to mobile devices. Health applications, for example, will be able to tell when body readings from sensors on the phone or associated wearable devices are abnormal and need acting on. But the uses are potentially limitless and will bring about a new phase in how we rely on applications and our mobile devices in everyday life. And they will work even when the device is not connected to the Internet.
    Strictly speaking, a specific processor is not an essential requirement for using AI on a mobile phone. Chip-maker Qualcomm, for example, has provided a software-based approach called the Snapdragon Neural Processing Engine to allow developers using its chips to incorporate AI into their software. An example of this is a car that monitors the driver using a camera and warns when they are using their smartphone or driving erratically.
    Specific AI hardware however greatly speeds up the process called “machine learning” and allows for more sophisticated types of AI to be used. Google’s AI hardware, called the Tensor Processing Unit, is 15 to 30 times faster than the fastest computer processors (CPUs) and graphic processors (GPUs) that power computers today. These TPUs were what gave Google’s DeepMind its ability to beat the world champions of the Chinese game of Go. These TPUs also have vastly improved Google’s automated language translation software, Google Translate.
    The inclusion of AI in mobile software is going to massively increase the potential usefulness of that software and, through that, how much we come to depend on the mobile phone. Our state of health, for example, is really about how we are doing relative to how we normally feel. Changes in behaviour can signal anything from changes in mental health, including conditions like dementia and Parkinson’s, to precursors of illnesses such as diabetes and respiratory and cardiovascular diseases. Our phones could monitor patterns of activity and even how we walk. This ability would be based on the software learning our normal patterns and, once it has detected a change, deciding what to do about it.
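    To make the idea of learning our normal patterns concrete, here is a minimal sketch using only Python’s standard library; the step counts and the three-standard-deviation threshold are hypothetical illustrations, not any vendor’s actual health algorithm.

        import statistics

        def detect_change(history, today, threshold=3.0):
            # history: past daily readings (e.g., step counts) treated as "normal".
            # today:   the newest reading to check against that baseline.
            baseline = statistics.mean(history)
            spread = statistics.stdev(history)
            if spread == 0:
                return False
            z_score = abs(today - baseline) / spread
            return z_score > threshold

        # Hypothetical week of step counts followed by a sudden drop in activity.
        daily_step_counts = [9800, 10250, 9900, 10100, 9700, 10500, 9950]
        print(detect_change(daily_step_counts, 2300))  # True: flag it for follow-up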
    The phone would be part of a self-directed ecosystem of intelligent and autonomous machines, including cars. Not only is the driving of autonomous cars completely dependent on AI, but it is also likely that people will eventually share the use of these cars when needed rather than owning one themselves. AI will again be essential for managing this sharing, working out the most efficient distribution of cars and directing which cars need to pick up which clients. To do this, the scheduling AI service will need to liaise with the AI software on everyone’s phones to determine where and when they will be at a given location and where they need to get to.
    AI on a mobile device will also increasingly be used to keep the device protected, checking if applications and communications are secure or likely to be a threat. This technology is already being implemented in smart home appliances, but as software. The addition of special AI chips will allow them to be much faster and to do more. Researchers are also looking at analysing the way we move as a means of uniquely identifying the wearer of a device.
    AI will essentially be able to fill in for us, applying understanding and knowledge that not everyone possesses. And even for those who do, remembering to do something, even when it is in their own best interest, is sometimes hard.
    There is, however, a counterargument to the benefits of increasing the intelligence of mobile devices: the fear that as we come to rely on devices to do things for us, we will lose the underlying skills, and that this will eventually affect a person’s overall cognitive ability, or at least their ability to operate without the AI. The success of an AI assistant also depends on the user actually following its advice, and this is something that people may not be that good at doing.
    The Conversation
    Full Article: https://www.australasianscience.com.au/article/science-and-technology/apple-working-ai-chip-mobile-devices-bringing-them-new-level-intellig
    Disclosure
    David Glance owns shares in Apple
    Contact Information:
    https://www.australasianscience.com.au/article/science-and-technology/apple-working-ai-chip-mobile-devices-bringing-them-new-level-intellig

    Cisco Completes Acquisition of Artificial Intelligence Company MindMeld

    Cisco (NASDAQ: CSCO) announced today it has completed the acquisition of MindMeld Inc., a privately held artificial intelligence (AI) company based in San Francisco. MindMeld has pioneered the development of a unique AI platform that enables customers to build intelligent and human-like conversational interfaces for any application or device. Through its proprietary machine learning (ML) technology, MindMeld delivers incredible accuracy to help users interact with voice and chat assistants in a more natural way.
    As chat and voice quickly become the interfaces of choice, MindMeld’s AI technology will enable Cisco to deliver unique experiences throughout its portfolio, starting with collaboration. This acquisition will power new conversational interfaces for Cisco’s collaboration products, revolutionizing how users will interact with our technology, increasing ease of use, and enabling new cognitive capabilities.
    The MindMeld team joins the Cloud Collaboration group under the leadership of Jens Meggers, senior vice president and general manager, as the Cognitive Collaboration team.
    Cisco acquired MindMeld for $125 million in cash and assumed equity awards.
    More at http://www.cisco.com/.
    Forward-Looking Statements
    This press release may be deemed to contain forward-looking statements, which are subject to the safe harbor provisions of the Private Securities Litigation Reform Act of 1995, including statements regarding the acquisition powering new conversational interfaces for Cisco’s collaboration products, revolutionizing how our users will interact with our technology, increasing ease of use, and enabling new cognitive capabilities, the expected benefits to Cisco and its customers from completing the acquisition, and plans regarding MindMeld personnel. Readers are cautioned that these forward-looking statements are only predictions and may differ materially from actual future events or results due to a variety of factors, including, among other things, the potential impact on the business of MindMeld due to the uncertainty about the acquisition, the retention of employees of MindMeld and the ability of Cisco to successfully integrate MindMeld and to achieve expected benefits, business and economic conditions and growth trends in the networking industry, customer markets and various geographic regions, global economic conditions and uncertainties in the geopolitical environment and other risk factors set forth in Cisco’s most recent reports on Form 10-K and Form 10-Q. Any forward-looking statements in this release are based on limited information currently available to Cisco, which is subject to change, and Cisco will not necessarily update the information.
    Full article: https://telecomreseller.com/2017/05/26/cisco-completes-acquisition-of-artificial-intelligence-company-mindmeld/
    Contact Information:
    https://telecomreseller.com/2017/05/26/cisco-completes-acquisition-of-artificial-intelligence-company-mindmeld/

    Microsoft announces AI partnership with Preferred Networks to integrate deep learning Chainer technology into Azure

    On his blog, Steve “Guggs” Guggenheimer announced Microsoft’s new AI partnership with Preferred Networks to integrate Chainer technology with Azure. Guggs recounted his experience making the announcement yesterday at de:code 2017 in Tokyo, Japan, with a little help from his colleagues, Alex Kipman and Joseph Sirosh.
    Guggs talked about how one of his favorite parts of his job is talking with developers from countries all over the world and hearing their feedback. During the de:code 2017 keynote, Guggs spoke about Microsoft’s AI strategy as well as other Microsoft developments in Japan.
    One of Guggs’ key points in the keynote was the new partnership with Preferred Networks, the developer behind Chainer, a deep learning AI framework. Chainer can be used by developers across different industries to take advantage of AI and deep learning technologies, together with IoT, to make business processes easier and more efficient.
    Guggs noted three parts of the Microsoft and Preferred Networks partnership:

  • Technology – We will work with Preferred Networks to integrate their deep learning technology (Chainer, ChainerMN, etc.) with Microsoft Azure. ChainerMN (multi-node) will have friction-free deployment on hyper-scale GPU clusters to reduce training time for deep neural networks. In addition, we will help make Chainer work great on Windows and with the SQL Server 2017 Machine Learning features. (A minimal Chainer sketch follows this list.)
  • Education – We will work together to create deep learning training from beginner to advanced, in order to grow the base of developers with expertise on deep learning technologies. The advanced classes will teach how to apply practical deep learning to actual business cases.
  • Collaboration – Working together, we will drive the practical use of deep learning in the market to make digital transformation happen in several industries. We will establish a partner ecosystem to do artificial intelligence consulting and deep learning implementations that solve business challenges by utilizing deep learning.
    Through its partnership with Preferred Networks, Microsoft wants to help developers use Chainer to create better AI-powered applications to help manage and solve enterprise users’ biggest issues. The entire de:code 2017 event in Tokyo, Japan is available on Channel 9.
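    Because the partnership centers on the Chainer framework, here is a minimal Chainer sketch showing its define-by-run style; it assumes only that Chainer and NumPy are installed and is purely illustrative, not part of the Azure or SQL Server integration described above.

        import numpy as np
        import chainer
        import chainer.functions as F
        import chainer.links as L

        # A small two-layer network written as a Chainer Chain.
        class MLP(chainer.Chain):
            def __init__(self, n_units, n_out):
                super(MLP, self).__init__()
                with self.init_scope():
                    self.l1 = L.Linear(None, n_units)  # input size inferred on first call
                    self.l2 = L.Linear(None, n_out)

            def __call__(self, x):
                h = F.relu(self.l1(x))
                return self.l2(h)

        model = MLP(n_units=100, n_out=10)
        x = np.random.rand(8, 784).astype(np.float32)  # a dummy batch of inputs
        y = model(x)
        print(y.shape)  # (8, 10)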

    Full article: https://www.onmsft.com/news/microsoft-announces-ai-partnership-with-preferred-networks-to-integrate-deep-learning-chainer-technology-into-azure
    Contact Information:


    AI disrupts translations – the Google Translate App’s ability to translate text from photos comes out of deep learning techniques

    A future with highways full of self-driving cars or robot friends that can actually hold a decent conversation may not be far away. That’s because we’re living in the middle of an “artificial intelligence boom” — a time when machines are becoming more and more like the human brain.
    That’s partly because of an emerging subcategory of AI called “deep learning.”
    It’s a process that’s often trying to mimic the human brain’s neocortex, which helps humans with language processing, sensory perception and other functions.
    Essentially, deep learning is when machines figure out how to recognize objects. It’s often used to help a self-driving car see a nearby pedestrian, or to let Facebook know that there’s a human face in a photo. And despite only catching on in recent years, researchers have already applied deep learning in ways that could change our world.
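    Before those examples, here is a minimal sketch of what recognizing objects can look like in code, assuming PyTorch and torchvision with a pretrained ImageNet classifier; the file name photo.jpg is a hypothetical input.

        import torch
        from torchvision import models, transforms
        from PIL import Image

        # Load an image classifier pretrained on ImageNet.
        model = models.resnet18(pretrained=True)
        model.eval()

        # Standard ImageNet preprocessing: resize, crop, convert, normalize.
        preprocess = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])

        image = Image.open("photo.jpg").convert("RGB")  # hypothetical input photo
        batch = preprocess(image).unsqueeze(0)          # add a batch dimension

        with torch.no_grad():
            logits = model(batch)
        # The index maps to one of the 1,000 ImageNet categories (objects, animals, etc.).
        print("Predicted class index:", logits.argmax(dim=1).item())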
    Here are some examples of what it can do now:

    Understanding the Earth’s trees

    NASA’s Earth Exchange used a deep learning algorithm on satellite images to figure out the amount of land covered by trees across the United States. That information could improve scientists’ understanding of American ecosystems, plus how to most accurately model climate change’s effects on our planet.
    NASA essentially uses deep learning to fix a specific problem. As more and more data becomes available, it also becomes harder (and more time-consuming) for scientists to interpret it. Files used in the project could be several petabytes in size. To put that in perspective, one petabyte is equivalent to 1,000 terabytes, and a one-terabyte hard drive could hold about 500 hours of movies or 17,000 hours of music.
    “We do large-scale science. We’re talking about big data,” said Sangram Ganguly, a senior research scientist at NASA Ames Research Center and the BAER Institute. “As the data is increasing, there’s a need to merge some [conventional] physics-based models with machine learning models.”

    Translating the world

    The Google Translate App’s ability to translate text from photos comes out of deep learning techniques. The app is able to recognize individual letters or characters in an image, recognize the word those letters make, then look up the translation.


    This process is more complicated than it seems — often, words appear in the real world under messier conditions than simple fonts on a computer. They’re frequently “marred by reflections, dirt, smudges and all kinds of weirdness,” as Google put it on its research blog. Therefore, its software also had to learn “dirty” depictions of text, a process that requires a robust database of photos to reference.
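    As an illustration of the recognize-then-look-up pipeline described above, here is a minimal sketch that assumes the open-source pytesseract OCR wrapper (and its Spanish language pack); the image file and the tiny word dictionary are hypothetical stand-ins for Google’s trained models.

        from PIL import Image
        import pytesseract  # requires the Tesseract OCR engine to be installed

        # Step 1: recognize the characters and words in the photo.
        text = pytesseract.image_to_string(Image.open("sign.jpg"), lang="spa")

        # Step 2: look up a translation for each recognized word.
        # A tiny hardcoded dictionary stands in for a real translation model or service.
        dictionary = {"salida": "exit", "peligro": "danger", "abierto": "open"}
        translated = [dictionary.get(word.strip(".,;:!?").lower(), word)
                      for word in text.split()]
        print(" ".join(translated))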

    Teaching robots how to understand human life

    Sanja Fidler, an assistant professor at the University of Toronto, says that she’s working on “cognitive agents” that “perceive the world like humans do” and “communicate with other humans in the environment.” But instead of focusing on the humanoid hardware of a robot, she is interested in building the robot’s perception of our world.
    In the future, if robots and humans ever coexist, robots “need to learn about objects and simple actions,” Fidler said. “You want the robot to understand how people are actually feeling and what they could be thinking.”
    Fidler executes deep learning techniques on “robots” by taking data from pop culture. She trains the software-based mind of that “robot” with famous texts like the Harry Potter book series, then introduces clips of movies and entire movie scripts based on those books. According to Fidler, an average movie has about 200,000 frames of information, while the average book has about 150,000 words. Though this kind of research is still in its early stages, combining that information helps robots learn new concepts and process the real world.
    To prove her case, Fidler showed a movie clip that robots automatically matched to text from the Harry Potter series. By picking up on words like “candy shop” and “white linen sheets,” the robot recognized a scene where Harry comes to in a hospital and discovers jars of treats. In other words, the robot is able to understand the human world enough to match closely tied visuals to words, thanks to the wonders of deep learning.
    More is likely to come.
    Full article: https://mic.com/articles/178155/the-artificial-intelligence-boom-is-here-heres-how-it-could-change-the-world-around-us#.b9Qg1o83i
    Contact Information:
    https://mic.com/articles/178155/the-artificial-intelligence-boom-is-here-heres-how-it-could-change-the-world-around-us#.b9Qg1o83i

    GE Wants To Be The Next Artificial Intelligence Powerhouse

    When you hear the term “artificial intelligence,” you may think of tech giants Amazon, Google, IBM, Microsoft, or Facebook. Industrial powerhouse General Electric is now aiming to be included on that short list. It may not have a chipper digital assistant like Cortana or Alexa. It won’t sort through selfies, but it will look through X-rays. It won’t recommend movies, but it will suggest how to care for a diesel locomotive. Today, GE announced a pair of acquisitions and new services that will bring machine learning AI to the kinds of products it’s known for, including planes, trains, X-ray machines, and power plants.
    The effort started in 2015 when GE announced Predix Cloud—an online platform to network and collect data from sensors on industrial machinery such as gas turbines or windmills. At the time, GE touted the benefits of using machine learning to find patterns in sensor data that could lead to energy savings or preventative maintenance before a breakdown. Predix Cloud opened up to customers in February, but GE is still building up the AI capabilities to fulfill the promise. “We were using machine learning, but I would call it in a custom way,” says Bill Ruh, GE’s chief digital officer and CEO of its GE Digital business (GE calls its division heads CEOs). “And we hadn’t gotten to a general-purpose framework in machine learning.”
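    As an illustration of the kind of pattern-finding on sensor data that Ruh describes, here is a minimal sketch assuming scikit-learn and synthetic readings; an isolation forest flags unusual measurements, which is just one possible approach and not GE’s Predix implementation.

        import numpy as np
        from sklearn.ensemble import IsolationForest

        # Synthetic sensor history: columns might be temperature, vibration, pressure.
        rng = np.random.default_rng(0)
        normal_readings = rng.normal(loc=[75.0, 0.2, 30.0],
                                     scale=[2.0, 0.05, 1.0],
                                     size=(1000, 3))

        # Learn what "normal" looks like from the historical data.
        detector = IsolationForest(contamination=0.01, random_state=0)
        detector.fit(normal_readings)

        # New readings: the second one runs hot and vibrates hard.
        new_readings = np.array([[74.8, 0.21, 29.7],
                                 [92.0, 0.90, 30.2]])
        print(detector.predict(new_readings))  # 1 = normal, -1 = flag for maintenance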
    Today GE revealed the purchase of two AI companies that Ruh says will get them there. Bit Stew Systems, founded in 2005, was already doing much of what Predix Cloud promises—collecting and analyzing sensor data from power utilities, oil and gas companies, aviation, and factories. (GE Ventures has funded the company.) Customers include BC Hydro, Pacific Gas & Electric, and Scottish & Southern Energy.
    The second acquisition, Wise.io, is less obvious. Founded by astrophysics and AI experts who used machine learning to study the heavens, the company reapplied the tech to streamlining companies’ customer support systems, picking up clients like Pinterest, Twilio, and TaskRabbit. GE believes the technology will transfer yet again, to managing industrial machines. “I think by the middle of next year we will have a full machine learning stack,” says Ruh.


    GE is selling a new service that promises to predict when a machine will break down…so technicians can preemptively fix it.
    GE also just bought ServiceMax, maker of software for dispatching field service agents, for a cool $915 million. “It’s a unicorn, essentially,” Ruh says.
    Though young, Predix is growing fast, with 270 partner companies using the platform, according to GE, which expects revenue on software and services to grow over 25% this year, to more than $7 billion. Ruh calls Predix a “significant part” of that extra money. And he’s ready to brag, taking a jab at IBM Watson for being a “general-purpose” machine-learning provider without the deep knowledge of the industries it serves. “We have domain algorithms, on machine learning, that’ll know what a power plant is and all the depth of that, that a general-purpose machine learning will never really understand,” he says.
    GE Healthcare lobbed a shot across IBM’s bow in announcing a deal with UCSF to develop AI-driven apps for the care of various diseases. (See my colleague Chrissy Farr’s full coverage here) Health is the first industry IBM Watson targeted (starting with years of studying cancer data), and UCSF is a nice prize.
    Not synonymous with health is another new GE customer, McDonald’s, which is using a Predix service called Current to lower energy costs on items like heating and lighting. Most of GE’s new offerings are in the energy realm, such as Digital Substation, which promises to use monitoring and AI to reduce downtime for power companies.
    One especially dull-sounding new Predix service—Predictive Corrosion Management—touches on a very hot political issue: giant oil and gas pipeline projects. Over 400 people have been arrested in months of protests against the Dakota Access Pipeline, which would carry crude oil from North Dakota to Illinois. The issue is very complicated, but one concern of protestors is that a pipeline rupture would contaminate drinking water for the Standing Rock Sioux reservation.
    “I think absolutely this is aimed at that problem. If you look at why pipelines spill, it’s corrosion,” says Ruh. “We believe that 10 years from now, we can detect a leak before it occurs and fix it before you see it happen.” Given how political battles over pipelines drag on, 10 years might not be so long to wait.

    By Sean Captain
    See full article https://www.fastcompany.com/3065692/ge-wants-to-be-the-next-artificial-intelligence-powerhouse
    Contact Information:
    https://www.fastcompany.com/3065692/ge-wants-to-be-the-next-artificial-intelligence-powerhouse

    Microsoft expands artificial intelligence (AI) efforts with creation of new Microsoft AI and Research Group

    REDMOND, Washington — Microsoft Corp. announced it has formed the Microsoft AI and Research Group, bringing together Microsoft’s world-class research organization with more than 5,000 computer scientists and engineers focused on the company’s AI product efforts. The new group will be led by computer vision luminary Harry Shum, a 20-year Microsoft veteran whose career has spanned leadership roles across Microsoft Research and Bing engineering.

    Microsoft is dedicated to democratizing AI for every person and organization, making it more accessible and valuable to everyone and ultimately enabling new ways to solve some of society’s toughest challenges. Today’s announcement builds on the company’s deep focus on AI and will accelerate the delivery of new capabilities to customers across agents, apps, services and infrastructure.
    In addition to Shum’s existing leadership team, several of the company’s engineering leaders and teams will join the newly formed group, including the Information Platform, Cortana and Bing, and Ambient Computing and Robotics teams, led by David Ku, Derrick Connell and Vijay Mital, respectively. All combined, the Microsoft AI and Research Group will encompass AI product engineering, basic and applied research labs, and New Experiences and Technologies (NExT).
    “We live in a time when digital technology is transforming our lives, businesses and the world, but also generating an exponential growth in data and information,” said Satya Nadella, CEO, Microsoft. “At Microsoft, we are focused on empowering both people and organizations, by democratizing access to intelligence to help solve our most pressing challenges. To do this, we are infusing AI into everything we deliver across our computing platforms and experiences.”
    “Microsoft has been working in artificial intelligence since the beginning of Microsoft Research, and yet we’ve only begun to scratch the surface of what’s possible,” said Shum, executive vice president of the Microsoft AI and Research Group. “Today’s move signifies Microsoft’s commitment to deploying intelligent technology and democratizing AI in a way that changes our lives and the world around us for the better. We will significantly expand our efforts to empower people and organizations to achieve more with our tools, our software and services, and our powerful, global-scale cloud computing capabilities.”
    Microsoft is taking a four-pronged approach to its initiative to democratize AI:

  • Agents. Harness AI to fundamentally change human and computer interaction through agents such as Microsoft’s digital personal assistant Cortana
  • Applications. Infuse every application, from the photo app on people’s phones to Skype and Office 365, with intelligence
  • Services. Make these same intelligent capabilities that are infused in Microsoft’s apps —cognitive capabilities such as vision and speech, and machine analytics — available to every application developer in the world
  • Infrastructure. Build the world’s most powerful AI supercomputer with Azure and make it available to anyone, to enable people and organizations to harness its power
    More information about this approach can be found here.
    For 25 years, Microsoft Research has contributed to advancing the state-of-the-art of computing through its groundbreaking basic and applied research that has been shared openly with the industry and academic communities, and with product groups within Microsoft. The organization has contributed innovative technologies to nearly every product and service Microsoft has produced in this timeframe, from Office and Xbox to HoloLens and Windows. More recently, Shum has expanded the organization’s mission to include the incubation of disruptive technologies and new businesses.
    “My job has been to take Microsoft Research, an amazing asset for the company, and make it even more of a value-creation engine for Microsoft and our industry,” Shum said. “Today’s move to bring research and engineering even closer will accelerate our ability to deliver more personal and intelligent computing experiences to people and organizations worldwide.”
    The Microsoft AI and Research Group is hiring for positions in its labs and offices worldwide. More information can be found at https://www.microsoft.com/en-us/research/careers/.
    Microsoft (Nasdaq “MSFT” @microsoft) is the leading platform and productivity company for the mobile-first, cloud-first world, and its mission is to empower every person and every organization on the planet to achieve more.
    Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.
    Read more at https://news.microsoft.com/2016/09/29/microsoft-expands-artificial-intelligence-ai-efforts-with-creation-of-new-microsoft-ai-and-research-group/#Jgl2ELuBVBqlIrCK.99

    Contact Information:

    Microsoft Public Relations Contacts