Adobe Expands Integrations With AI-Driven Account Profiling, LinkedIn Syncing

Adobe has launched new capabilities within the Marketo Engage ABM Essentials offering, part of the Adobe Marketing Cloud, including new AI-powered models in its Account Profiling tool and an expanded LinkedIn partnership. The two updates aim to help accelerate the execution of ABM strategies for sales and marketing teams, as well as drive personalized and account-based experiences.

The Marketo Engage Account Profiling feature, powered by AI, now includes predictive analytics designed to suggest potential net-new accounts and help users uncover new target accounts from more than 500 million data points within the platform.
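The mechanics behind predictive account suggestions like these can be pictured as look-alike scoring: rank candidate accounts by how closely their firmographic profile resembles accounts that already converted. The sketch below is purely illustrative; the feature vectors and scoring method are assumptions for the example, not Adobe's actual model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two firmographic feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def score_accounts(seed_profiles, candidates):
    """Rank candidate accounts (name, feature vector) by their average
    similarity to known good-fit 'seed' accounts, best-fit first."""
    def avg_sim(vec):
        return sum(cosine(vec, s) for s in seed_profiles) / len(seed_profiles)
    return sorted(candidates, key=lambda c: avg_sim(c[1]), reverse=True)

# One converted seed account; two candidates scored against it
seeds = [[1.0, 0.0, 1.0]]   # hypothetical encoded traits, e.g. industry/size/region
candidates = [("acme", [1.0, 0.0, 1.0]), ("beta", [0.0, 1.0, 0.0])]
ranked = score_accounts(seeds, candidates)
```

Here "acme", whose profile matches the seed exactly, ranks ahead of "beta"; a production system would of course use far richer features and a trained model.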

Adobe Marketing Cloud has also expanded its LinkedIn Matched Audiences integration, which lets users sync lists of accounts from Marketo Engage into LinkedIn to create personalized ad targeting. The LinkedIn extension can also filter for different target account characteristics.

“Identifying the right target accounts and key decision makers shouldn’t deter marketers from launching or implementing an ABM strategy,” said Brian Glover, Director of Product Marketing of Marketo Engage at Adobe, in a statement to Demand Gen Report. “It shouldn’t be a guessing game or require weeks of manual work either. With the latest enhancements to Marketo Engage’s Account Profiling capability and our expanded LinkedIn integration, B2B marketers can identify their best-fit accounts in minutes out of a universe of 25 million companies and then target key decision makers at those accounts on LinkedIn.”

NLG in the newsroom: fast, consistent, and hyperlocal

We’ve written about how Natural Language Generation can eliminate the bottleneck of manual, one-at-a-time analysis within business environments, producing data-driven insights that otherwise would remain fossilized in spreadsheets on the network drive. It is also worth noting that the same principles apply to the world of news reporting, where there is so much valuable data to consider that editors—when assigning stories based on limited human analytical capacity—are forced to leave many data sets completely unexamined. Editorial prioritization tends to mirror demand, leading to the omission of hyperlocal content that is highly useful, but to only a small subset of readers. A Natural Language Generation platform—particularly when it is open, extensible, smart, and secure (like ours!)—solves this problem.

This is not a theoretical statement, but one that is actively illustrated today by UK-based RADAR AI, and by BBC News Labs. Both organizations are using Arria’s NLG platform to publish high-quality stories that otherwise would simply never be written.

RADAR AI Hits a Milestone: 200,000 Stories

RADAR (an acronym for Reporters and Data and Robots) is a joint venture between data-journalism start-up Urbs Media and the Press Association, the UK national news agency. Just last week, after being open for business for only fourteen months, RADAR AI published its 200,000th story: “Crown court waiting times increase by more than seven months in Newcastle,” by Harriet Clugston, Data Reporter. (Nice job title. Notice how effectively it stakes out Clugston’s data-driven approach while also establishing her beat as essentially boundary-free. With a title like that, a journalist can use any data set as a starting point for investigation and explanation.)

Relatively few in the world are concerned about Crown Court waiting times in Newcastle, a city of approximately 300,000 as of 2018. In fact, we can guess that relatively few of the 300,000 citizens of Newcastle are concerned about the Crown Court waiting times. Generally speaking, our interest in this subject is proportional to the degree of our civic or personal interest in the municipality or the court. We’re mildly interested if we’re paying taxes to keep the system running, but probably not highly interested unless ours is one of the 455 cases waiting to be heard. For understandable, practical reasons, an editor probably would not have assigned a reporter to analyze how caseloads and waiting times have changed over the years, and then to write a story describing movements in the data. A subject of broader interest would win the day and the Crown Court story—which does in fact contain valuable insights for those who are interested in the topic—would never have appeared.

Fortunately the RADAR approach “breaks the ‘content compromise’ which forces organisations to choose between high quality, reliable and bespoke content or mass-produced superficial output.” Makes perfect sense. Does an editor really want Harriet Clugston to spend her time crunching and describing numbers? No, Arria’s NLG platform can do most of that for her, freeing her to perform a level of reporting that makes the hyperlocal piece read like a story of broader interest. In the brief, information-packed article, Clugston includes direct quotations from four individuals who are involved in the UK criminal justice system:

  • Stephanie Boyce, Deputy Vice President of the Law Society of England and Wales;
  • John Apter, Chair of the Police Federation of England and Wales;
  • Sara Glenn, Deputy Chief Constable for Hampshire Constabulary; and
  • A spokeswoman for HM Courts and Tribunals Service.

With NLG in an assisting role, Clugston has taken the opportunity to maximize the value of her story to the few citizens of Newcastle who are fretting about longer wait times at Crown Court.

Let’s do the math on those 200,000 stories that RADAR has published since opening for business in June of 2018. That is a rate of approximately 13,300 stories per month, or about 430 per day. Not bad for an organization that has only seven employees on LinkedIn!

For comparison, by its own reckoning The New York Times—which employs approximately 1,300 staff writers—publishes roughly 200 “pieces of journalism” per day, with “pieces of journalism” likely including blog posts and interactive graphics in addition to stories. Following is an excerpt from a 2017 internal report created by “the 2020 group” of Times editors tasked with spending the prior year examining editorial policies and practices at the paper:

“The Times publishes about 200 pieces of journalism every day. This number typically includes some of the best work published anywhere. It also includes too many stories that lack significant impact or audience—that do not help make The Times a valuable destination.”

A couple of paragraphs later, the report states the problem even more plainly: “We devote a large amount of resources to stories that relatively few people read…. It wastes time—of reporters, backfielders, copy editors, photo editors, and others—and dilutes our report.”

It would appear that RADAR has found a way to address these concerns. A story about Crown Court waiting times probably lacks a significant audience, but does have a significant impact on a small audience. Especially if its appearance or delivery can be targeted to readers most likely to be interested, a hyperlocal story such as this one represents a step in the right direction rather than the dilution of “some of the best work published anywhere.”

The Newcastle story is available to us only as a screenshot, but here is another recent story from Harriet Clugston to which all of the observations above are applicable: NHS staff took almost 7,000 full-time days of sick leave because of drug or alcohol abuse last year, figures reveal.

BBC News Labs and SALCO Part 1

The BBC, too, facing heightened expectations for the frequency and quality of local news content, has commenced a ‘Semi-automated Local Content’ initiative, SALCO for short. BBC News Labs developers Roo Hutton and Tamsin Green began by developing a pipeline that reported Accident and Emergency statistics for more than one hundred local hospitals—interesting information but, again, not the best use of top-notch journalists’ time. As Hutton explains in his excellent article from March of this year, “Stories by numbers: How BBC News is experimenting with semi-automated journalism,” “Automated journalism isn’t about replacing journalists or making them obsolete. It’s about giving them the power to tell a greater range of stories—whether they are directly publishing the stories we generate, or using them as the starting point to tell their own stories—while saving them the time otherwise needed to analyse the underlying data.”

Hutton describes a respectful, cooperative approach during which he and Green work closely with journalists in order to learn how they think, and also to help the journalists understand how Arria’s platform works, and why traditional writers should embrace NLG.

“This story has been generated using BBC election data and some automation”

The experiment was such a success that BBC News Labs decided to take the same approach to covering local elections in May of this year. Writing on the News Labs site a couple of weeks ago in “Salco part 2: Automating our local elections coverage,” Tamsin Green explains the rationale, both in terms of workload volume and the need for consistency in coverage: “Local elections on BBC News Online are covered at both a national level, to aggregate results and highlight trends, as well as locally by journalists working out of regional hubs. With 248 councils up for election in England alone, that means a huge amount of journalism in a short period of time. We observed huge variation in election coverage across the country: Some councils were not covered. Some would simply take tweets from the @bbcelection Twitter feed. Others were there at the count, posting photographs and detailed results.”

This is a textbook case for NLG, and the sample output from BBC News Labs looks good. As you contemplate it, consider the variance in style and content that would naturally arise if reporters were left to configure the information themselves from municipality to municipality—and consider how long it would take to assemble even a hodgepodge of inconsistent reporting.

Twilio: Harnessing The Power Of AI (Artificial Intelligence)

Twilio, a provider of voice, video and messaging services, reported its second quarter results, and they certainly stood out. Revenues spiked by 86% to $275 million, and there was a profit of 2 cents a share.

On the earnings call, CEO Jeff Lawson noted: “We have the opportunity to change communications and customer engagement for decades to come.”

And yes, as should be no surprise, one of the drivers will be AI (Artificial Intelligence). Just look at the company’s Autopilot offering (at today’s Signal conference, Twilio announced that the product is generally available to customers). This is a system that allows for the development, training, and deployment of intelligent bots, IVRs, and Alexa apps.

Now it’s true that there is plenty of hype with AI. Let’s face it, many companies are just using it as marketing spiel to gin up interest and excitement.

Yet Autopilot is the real deal. “The advantage that’s unique to Twilio’s API platform model is that we build these tools in response to seeing hot spots of demand and real need from our customers,” said Nico Acosta, who is the Director of Product & Engineering for Twilio’s Autopilot & Machine Learning Platform. “We have over 160 thousand customers of every size across a huge breadth of industries and we talk to them about the solutions they need to improve communication with their customers. What do they keep building over and over? What do they actively not want to build because it’s too heavy a lift? Those conversations inform the products we create that ultimately help them differentiate themselves through a better customer experience.”

AI Innovation

Consider that Autopilot breaks the conventional wisdom that there is an inherent tradeoff between operational efficiency and customer experience.  To do this, Twilio has been focusing on pushing innovation with AI, such as with:

  • Classification: This involves grouping utterances and mapping them to the correct task. With AI, the system gets smarter and smarter.
  • Entity Extraction: This uses NLP (Natural Language Processing) to locate details like time, place, cities, phone numbers and so on. This means it is easier to automate repetitive tasks like setting up appointments (if the customer says “7 at night,” the NLP will understand this).
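As a rough illustration of the entity-extraction idea (the pattern rules below are a toy approximation, not Twilio's NLP engine), here is how an informal phrase like “7 at night” can be normalized into a machine-usable time:

```python
import re

def extract_time(utterance):
    """Toy entity extraction: map informal time expressions like
    '7 at night' or '7:30 pm' to a 24-hour HH:MM string."""
    text = utterance.lower()
    # Evening expressions map onto the 24-hour clock
    m = re.search(r"\b(\d{1,2})(?::(\d{2}))?\s*(?:at night|in the evening|pm)\b", text)
    if m:
        hour = int(m.group(1)) % 12 + 12
        return f"{hour:02d}:{int(m.group(2) or 0):02d}"
    # Morning expressions stay in the 0-11 range
    m = re.search(r"\b(\d{1,2})(?::(\d{2}))?\s*(?:in the morning|am)\b", text)
    if m:
        hour = int(m.group(1)) % 12
        return f"{hour:02d}:{int(m.group(2) or 0):02d}"
    return None  # no time entity recognized
```

A real NLP engine learns these mappings statistically rather than from hand-written rules, but the contract is the same: free-form speech in, structured entity out.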

There are definitely some interesting use cases for Autopilot. One is with Green Dot, which is a next-generation online bank. A big challenge for the company is that its customers are often new to financial services. But with Autopilot, Green Dot has been able to develop a conversational agent that works on a 24/7 basis, on both the IVR and chatbot system in the mobile app. The company also gets a view of important metrics like usage, time spent and common questions about products.

Here are some other interesting use cases:

  • Insurance: Generate quotes automatically, file claims easily, and answer FAQs.
  • Hospitality: Offer virtual concierge services, answer FAQs, and manage reservations programmatically.
  • Real Estate: Field and generate leads, schedule appointments programmatically, and answer questions about listings.
  • Retail & Ecommerce: Allow customers to search products, take advantage of promotional offers, and check delivery status.

Keep in mind that changing a traditional IVR system can be complicated and time-consuming, let alone adding AI capabilities to one. But with Autopilot, development is lightning fast: you can create bots with simple JSON syntax and deploy them on multiple channels with zero code changes. There are also easy-to-use tools for training the AI models.
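For flavor, an Autopilot bot task is defined as a JSON document of “actions” the bot executes when the task is triggered. The snippet below is a simplified sketch of that schema; treat the field set as an approximation and consult Twilio's documentation for the exact format.

```python
import json

# Simplified sketch of an Autopilot-style task definition: an "actions"
# array the bot runs when this task fires.
greeting_task = {
    "actions": [
        {"say": "Welcome! How can I help you today?"},
        {"listen": True},  # keep the session open for the caller's reply
    ]
}

task_json = json.dumps(greeting_task, indent=2)
print(task_json)
```

The same JSON definition can then be deployed to voice, SMS, or chat channels without channel-specific code.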

Takeaways With Autopilot

The development of Autopilot yielded some important lessons about AI. “No single model can handle the many different use cases,” said Acosta. “Because of this, we created a multi-model architecture that adjusts in real time. For example, if there are large amounts of data, a deep learning algorithm might be best. But if not, a more traditional model could be better.”
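The routing logic Acosta describes can be caricatured in a few lines. The threshold and model names below are invented for illustration, not Twilio's actual architecture:

```python
def pick_model(n_samples, threshold=10_000):
    """Toy model router: large training sets go to a data-hungry deep
    model, small ones to a simpler model that generalizes from less data."""
    return "deep_learning" if n_samples >= threshold else "logistic_regression"

# Example routing decisions for two dataset sizes
choices = {n: pick_model(n) for n in (500, 50_000)}
```

In a real system the decision would also weigh latency budgets and accuracy measured on held-out data, but the principle is the same: the architecture picks the model per use case rather than forcing one model everywhere.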

Regardless of the technical details, Autopilot does point to the incredible power of AI and how it is poised to upend the software market.

“The potential of AI to transform communications is huge, but there’s a big delta between that potential and how companies are actually using it at scale,” said Acosta.  “So at Twilio, we are focused on the building blocks that customers need to put the innovation to work.”

Tom (@ttaulli) is the author of the book, Artificial Intelligence Basics: A Non-Technical Introduction.

Artificial Intelligence Search, NLP & Automation

Another significant requirement is the need for an efficient method of reducing the amount of computational searching for a match or a solution. Considerable work has been done on the problem of pruning a search space without affecting the result of the search. One technique is to compare the value of completing a particular branch versus another; of course, measuring that value is itself a problem. As real-time applications become more important, search methods must become even more efficient in order for an AI system to run in real time.
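One classic instance of pruning that provably does not affect the result is alpha-beta pruning in game-tree search; a minimal sketch:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    """Minimax search with alpha-beta pruning: a branch is abandoned as
    soon as it provably cannot improve on an alternative already found,
    so the pruned search returns the same value as the full search."""
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, value))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # prune: the minimizing player will avoid this branch
        return best
    best = float("inf")
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, children, value))
        beta = min(beta, best)
        if alpha >= beta:
            break  # prune: the maximizing player will avoid this branch
    return best

# Tiny two-ply example tree with leaf values
tree = {"root": ["L", "R"], "L": ["L1", "L2"], "R": ["R1", "R2"]}
vals = {"L1": 3, "L2": 5, "R1": 2, "R2": 9}
best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 lambda n: tree.get(n, []), lambda n: vals.get(n, 0))
```

Here the right subtree is cut off after seeing the leaf worth 2, since the maximizer already has a guaranteed 3 on the left; the answer (3) is identical to an exhaustive search.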

Natural Language Processing

There has been an increasing amount of work on the problem of language understanding. Early work was focused on direct pattern matching in speech recognition and parsing sentences in processing written language. More recently, there has been more use of knowledge about the structure of language and speech to reduce the computational requirements and improve the accuracy of the results. There are several systems that can recognize as many as several thousand words, enough for a fairly extensive command set in a “hands busy” application, but not enough for business text entry (dictation to text).

A number of production natural language command systems are capable of understanding structured English commands. These systems are context-sensitive and require that a situation-specific grammar and vocabulary be created for each application.
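A situation-specific grammar of this kind can be as simple as a list of command patterns mapped to actions. The sketch below is a toy; production systems of the era used far richer grammars, and the command set here is invented for illustration:

```python
import re

# A tiny application-specific grammar: each structured-English command
# pattern maps to an action name, with named slots for the parameters.
GRAMMAR = [
    (re.compile(r"^show (?:me )?(?P<report>sales|inventory) for (?P<region>\w+)$",
                re.IGNORECASE), "show_report"),
    (re.compile(r"^delete order (?P<order_id>\d+)$", re.IGNORECASE), "delete_order"),
]

def parse_command(text):
    """Return (action, slots) for a recognized command, else None."""
    for pattern, action in GRAMMAR:
        m = pattern.match(text.strip())
        if m:
            return action, m.groupdict()
    return None  # outside this application's vocabulary
```

The context sensitivity the passage mentions shows up here as the grammar itself: a different application ships a different `GRAMMAR` table, and anything outside it is simply not understood.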

Automatic Programming

An obvious application for Al technology is in the development of software without some of the more tedious aspects of coding. There has been some research on various aspects of software program development. Arthur D. Little, Inc. has developed a structured English to LISP compilation system for a client and an equivalent commercial system has recently been announced.

It should soon be possible to build a Programmer’s Assistant that will help with the more routine aspects of code development, although no development has apparently been completed beyond a system that assists in the training of Ada programmers and a prototype system that converts a logical diagram to LISP (Reasoning Systems). True automatic programming that relieves a programmer of the responsibility for logical design seems to be some time in the future.

But ‘automation’ in AI, from self-learning systems to systems designed to assist with or automate certain tasks, has come under fire for its potential to be flawed. It is very difficult for the flawed, biased human mind to create and automate a system, process, or task without including some of our inherent flaws and biases. Thus, critics have recently charged that AI solutions being developed for automation may include flawed logic.

This has been demonstrated, to a degree, by some of Microsoft’s attempts to automate chatbot learning by exposing a bot to content on the internet, or at least to content on Twitter. Such a system very quickly picks up biased views and can become very politically incorrect in a short period of time.

Amazon’s Alexa as a presentation layer over a Tableau dashboard integrated with Arria’s NLG technology

A Big Day for Arria at VOICE Summit 2019
Greg Williams

I write this dispatch while sitting with Arria teammates outside of the crowded meeting space where Arria’s COO, Jay DeWalt, and Chief Scientist, Ehud Reiter, are unveiling the breakthrough use of Amazon’s Alexa as a presentation layer over a Tableau dashboard integrated with Arria’s NLG technology. We’re happy to listen through the open door, giving up our seats to VOICE Summit attendees, many of whom we met at the Arria booth this morning. It’s great to see interest in Natural Language Generation at such high levels. NLG occupies a unique and absolutely essential layer of the technology stack that will make possible dynamic, multi-turn conversations with machines.

Jay and Ehud are joined by Kapila Ponnamperuma, Arria’s Head of Technology Integrations, and we know that Kapila is going to demonstrate the technology live in a few minutes. Since we’ve seen the demo, we know what the audience is in for. BI dashboards plus NLG are already impressive enough. Just wait until the attendees witness Kapila asking questions related to sales performance, and Alexa responding intelligently, remembering context to support follow-on questions. . . .

In the presentation leading up to the demo, Jay and Ehud make the point that data comprehension is more difficult when looking at raw data than at visuals, and more difficult when looking at visuals alone than at visuals combined with narrative written in natural human language. Hence the rapid pace of Arria’s BI dashboard integrations.

By combining an Arria-integrated BI dashboard with Alexa, or other conversational platforms, Arria takes it one step further: facilitating dynamic conversational AI for business.

Arria at the VOICE Summit

Arria stands out at VOICE Summit as one of the few exhibitors primarily interested in business applications rather than consumer applications. (We also happen to be wearing fluorescent neon orange golf shirts, selected by SVP of Strategic Partnerships and Business Development Lyndsee Manna, so we’re easy to spot.) Arria is dedicated to using the power of language to help businesses achieve greater efficiency, discover deeper insights in their data, and ultimately make better, smarter decisions than their competitors.

Update—A Few Minutes Later

It was an extraordinary demo, extraordinarily well-received. Kapila quizzed Alexa about sales performance across multiple measures and dimensions. Conversationally, without a mouse, he achieved the equivalent of drilling down into a BI dashboard, and the audience heard Arria respond immediately with actionable information.

In the discussion that followed, Kapila made the point that if you have an existing BI Dashboard, you can be up and running in hours, with a sophisticated multi-turn conversation application that remembers context to facilitate follow-on questions.
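The context-remembering behavior Kapila describes can be sketched roughly like this. This is a hypothetical illustration of multi-turn slot carry-over, not Arria's or Amazon's actual API:

```python
class SalesConversation:
    """Toy multi-turn dialogue state: slots mentioned in earlier turns
    (region, quarter) are remembered, so follow-on questions can omit them."""

    def __init__(self, data):
        self.data = data     # {(region, quarter): revenue}
        self.context = {}    # slots carried across turns

    def ask(self, region=None, quarter=None):
        # Fall back to remembered context for any slot the user omitted.
        region = region or self.context.get("region")
        quarter = quarter or self.context.get("quarter")
        self.context.update({"region": region, "quarter": quarter})
        revenue = self.data[(region, quarter)]
        return f"Revenue for {region} in {quarter} was ${revenue:,}."

convo = SalesConversation({("EMEA", "Q2"): 1_200_000, ("EMEA", "Q3"): 1_450_000})
first = convo.ask(region="EMEA", quarter="Q2")
followup = convo.ask(quarter="Q3")   # "What about Q3?" — region is remembered
```

The second question never mentions EMEA, yet it gets an EMEA answer: that carried-over context is what turns a series of one-shot queries into a conversation.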

The audience and the Arria presenters were so engaged with one another that an administrator had to call time in order to clear the room for the next session. Jay offered to continue the session in the hallway. Clusters of attendees kept him and Ehud busy fielding questions for another half an hour.

Tomorrow we’ll check in after Cathy Herbert delivers her VED Talk in the afternoon. (“TED Talk,” but with a V for VOICE.) Cathy will provide guidance on how companies can position themselves to take advantage of the forthcoming avalanche of improvements in Natural Language Generation, particularly when the NLG platform offers built-in computational and linguistic functions and is paired with a conversation platform such as Alexa.

Until then, signing off.

Analytics Are Defining The Future Of Digital Advertising

Advertisers are planning to increase the average number of integrated data sources from 5.4 today to 6.2 in 2019, searching for new customer and advertising effectiveness insights. The Salesforce study found that customer relationship management (CRM), online, and demographic data are the three most common data sources integrated as part of broader marketing technology stacks. As advertisers and marketers focus on increasing Marketing Qualified Leads (MQLs) and Sales Qualified Leads (SQLs), marketing automation system-based data is becoming a must-have for tracking advertising effectiveness.
91% of advertisers have or plan to adopt a data management platform (DMP) in the next fiscal year.

These and other insights are from Salesforce’s Digital Advertising 2020 Report published today (18 pp., PDF, no opt-in). The study is based on a global survey of 900 advertising leaders across North America, Europe, and the Asia-Pacific region. The study illustrates new priorities, strategies, and tactics that signify the accelerating growth of data-driven advertising.  Please see the report for additional details regarding respondent demographics and the methodology.

Key takeaways from the study include the following:

  • 47% of advertisers in North America will increase their use of third-party data in the next year, the largest increase among regions. Advertisers are looking to establish second- and third-party partnerships to tap into data sources they don’t own to gain more inputs into decision-making, ad targeting, and ad effectiveness. Over the next two years, advertisers’ use of second- and third-party data will grow by 26% and 30%, respectively.
SOURCE: SALESFORCE’S DIGITAL ADVERTISING 2020 REPORT
  • 91% of advertisers have or plan to adopt a data management platform in the next fiscal year. As brands rely on multiple data sources to target audiences, they’re increasingly turning to data management platforms (DMPs) to import that data, find segments to target, and send targeting instructions to ad networks. By 2019, an advertiser’s ability to integrate diverse databases and gain insights faster than competitors will have a significant impact on sales growth. The Salesforce study found data management platforms are at a tipping point: while just 20% of companies have been using a DMP for more than three years, an additional 21% are either currently implementing a DMP or have done so in the past year.
SOURCE: SALESFORCE’S DIGITAL ADVERTISING 2020 REPORT

Healthcare Virtual Assistants Market Expected to Reach $3,133.6 Million by 2026, Growing at a CAGR of 32.7%

The global Healthcare Virtual Assistants market was valued at $245.7 million in 2017 and is expected to reach $3,133.6 million by 2026, growing at a CAGR of 32.7% during the forecast period. Key factors influencing market growth include increasing demand for healthcare applications, the rising prevalence of chronic disorders, and growing demand for quality healthcare delivery. However, the need for prepared data in the healthcare industry is restraining market growth.

Healthcare virtual assistants are used in the healthcare industry to raise patient engagement to a higher standard. Patient engagement is an approach used to improve health outcomes and deliver better patient care at lower costs. These assistants enable healthcare organizations to collect demographic information, insurance details, and patient health history, along with finance/costing and procurement details, and to perform data mining and further analysis of all the records.

Get Sample Copy at https://www.reportsweb.com/inquiry&RW00012793380/sample

Among products, the Smart Speakers segment is expected to remain attractive during the forecast period owing to increasing consumer preference for technologically advanced products. Smart speakers are multifunctional, fast, wide-ranging, and consistent solutions. By geography, Asia Pacific is expected to capture the largest market share during the forecast period, driven by an increasing geriatric population, high smartphone penetration, technological advancements, the growing use of remote monitoring devices, and rising healthcare costs.

Some of the key players in the market include Verint Systems Inc., Infermedica, Sensely, Inc., Microsoft Corporation, CSS Corporation, Egain Corporation, Kognito Solutions, LLC, Healthtap, Inc., Babylon Healthcare Services Limited, ADA Digital Health, Amazon, Nuance Communications, Inc., True Image Interactive, Inc., Datalog.AI and Medrespond LLC.

User Interfaces Covered:
– Text-To-Speech
– Text-Based
– Automatic Speech Recognition
– Other User Interfaces

Products Covered:
– Chatbots
– Smart Speakers

What our report offers:
– Market share assessments for the regional and country level segments
– Strategic recommendations for the new entrants
– Market forecasts for a minimum of 9 years for all the mentioned segments, sub-segments, and regional markets
– Market Trends (Drivers, Constraints, Opportunities, Threats, Challenges, Investment Opportunities, and recommendations)
– Strategic analysis: Drivers and Constraints, Product/Technology Analysis, Porter’s five forces analysis, SWOT analysis etc.
– Strategic recommendations in key business segments based on the market estimations
– Competitive landscaping mapping the key common trends
– Company profiling with detailed strategies, financials, and recent developments
– Supply chain trends mapping the latest technological advancements

Make an Inquiry https://www.reportsweb.com/inquiry&RW00012793380/buying

Table of Contents:

1 Executive Summary

2 Preface

3 Market Trend Analysis

4 Porter’s Five Forces Analysis

5 Global Healthcare Virtual Assistants Market, By User Interface

6 Global Healthcare Virtual Assistants Market, By Product

7 Global Healthcare Virtual Assistants Market, By Application

8 Global Healthcare Virtual Assistants Market, By End User

9 Global Healthcare Virtual Assistants Market, By Geography

10 Key Developments

11 Company Profiling
11.1 Verint Systems Inc.
11.2 Infermedica
11.3 Sensely, Inc.
11.4 Microsoft Corporation
11.5 CSS Corporation
11.6 Egain Corporation
11.7 Kognito Solutions, LLC
11.8 Healthtap, Inc.
11.9 Babylon Healthcare Services Limited
11.10 ADA Digital Health
11.11 Amazon
11.12 Nuance Communications, Inc.
11.13 True Image Interactive, Inc.
11.14 Datalog.AI
11.15 Medrespond LLC.

For More Information about This Report: https://www.reportsweb.com/reports/healthcare-virtual-assistants-global-market-outlook-2017-2026

Contact Us:

Call: +1-646-491-9876
Email: sales@reportsweb.com

Amazon brings Alexa announcements to Fire TV and completes YouTube rollout

Amazon today announced that its Alexa announcements feature is launching on all Fire TV devices in countries where the intercom-like functionality is available. It’s useful for getting a message to everyone at once — “dinner’s ready,” “the movie is starting,” etc.

If someone in your home or apartment makes an announcement through an Echo speaker or another Alexa-compatible device, you’ll see a notification on-screen. Whatever you’re watching or listening to on the Fire TV will temporarily pause, and the message will be played. Once the announcement is done, your content will resume automatically. You can also record your own announcements with the Fire TV’s Alexa voice remote. (If you don’t want TV time to be interrupted, announcements can be disabled for your Fire TV.)

Additionally, after making its long-awaited return last month, Amazon says the YouTube app is expanding to all Fire TV devices. So no matter which one of the company’s streaming gadgets you use, you’ll once again have access to a native, official YouTube app. Today, it’s coming to the original Fire TV Stick, Fire TV, and second- and third-generation Fire TV devices.

Google makes a subtle but important change to the Search app

A look at the updated version of the Google Search app reveals that Google is phasing out its Voice Search feature in favor of the Google Assistant. Voice Search could be found in the Google Search app and on the app’s widget. The iconic microphone icon, when tapped, would allow the user to verbally request a search. But now, Google wants its virtual digital assistant to take over all voice searches.

Spotted first by 9to5Google, a subtle change can be found in the Google Search app (aka the Google app) in version 10.24 or 10.28 beta. The microphone icon once found on the right-hand side of the search bar has been replaced with the Google Assistant icon. In addition, the old search bar included the words “Say Hey Google” while the updated version now reads “Ask any question.”

Google Assistant is at the heart of the Google ecosystem. It is considered to be the best digital assistant among the top four, a group that also includes Amazon’s Alexa, Apple’s Siri and Microsoft’s Cortana. According to Google, there are now over 1 million actions that Google Assistant can handle. And with Assistant available on all of the Google Home smart speakers and the Nest Hub smart displays, replacing Voice Search with Google Assistant is a way for the company to bring all of its apps and devices in sync with each other.

Apple had a head start with Siri, but Google Assistant has surpassed Apple’s digital helper

Google hasn’t finished rolling this update out to everyone yet, but it is easy to check to see if it has hit your phone. Simply open the Google Search app and check out which icon is in the search bar.

The first virtual assistant found on a smartphone was Siri, which debuted with the Apple iPhone 4s on October 4, 2011. Apple had acquired the technology when it purchased Siri Inc. in 2010. At the time, Siri was a concierge app in the Apple App Store and its developers planned to offer it for Android and BlackBerry devices. Even though Apple had a head start in this space, some of the company’s former employees said last year that Apple lost its vision for Siri the day after it was first introduced on the iPhone 4s; that was the day that Steve Jobs died.

Meanwhile, several tests have shown that Google Assistant recognizes questions and gives appropriate responses better than Siri. An extensive test of the four major digital assistants was conducted last year by venture capital firm Loup Ventures and its analyst Gene Munster. The test showed that Google Assistant answered correctly or performed the correct action better than its rivals in all five categories tested (Local, Commerce, Navigation, Information, and Command). Siri finished last in three of the categories and answered only 12% of navigation queries correctly. Alexa finished second in four of the categories, while Cortana finished third in three of the classifications.

The aforementioned test was conducted on smart speakers. So a few months later, Loup put the assistants through their paces again, this time on smartphones. Google Assistant on a Pixel XL understood all 800 of the questions asked. Siri misunderstood 11 queries, while Alexa and Cortana missed 13 and 19, respectively. Google Assistant won four of the five categories (Local, Commerce, Navigation and Information) while finishing second in one. Siri placed second in four categories, topping Google Assistant in Command. Alexa was all over the place, with a second-place finish (Information), three third-place finishes (Local-tie, Commerce, Command), and a last-place finish in one category (Navigation). Cortana ranked last in three categories (Commerce, Information, Command) and finished third twice (Local-tie, Navigation).

Google Assistant has started to take charge of ‘Voice search’ in Google app

The ability to search the web by voice arrived well before digital assistants. And now, with digital assistants, users can not only use their voice to search the web but also interact by typing on the keyboard, eliminating the need for a standalone “voice search.”

Speaking of voice search, Google has spent a lot of resources in pushing Google Assistant, and as a result, it is probably the best in the market. Weirdly enough, Google Assistant never really took charge of the “voice search” in the Google app — until now.

Google has started to replace the voice search icon with the Google Assistant icon, meaning that all the queries are now going to be performed by Google Assistant. However, the UI hasn’t changed much.

The new feature is being rolled out in a phased manner, which means you might not see the change right away. Note that users who speak languages Google Assistant does not support will not see the change.

via: 9to5google