Facebook working on voice assistant to rival Amazon’s Alexa

(Reuters) – Facebook Inc is working to develop a voice assistant to rival the likes of Amazon.com Inc’s Alexa, Apple Inc’s Siri and Alphabet Inc’s Google Assistant.

“We are working to develop voice and AI assistant technologies that may work across our family of AR/VR products including Portal, Oculus and future products,” a Facebook spokesperson told Reuters in an e-mailed response on Wednesday.

Earlier in the day, CNBC reported on the development, saying that the team behind the technology has been contacting vendors in the smart speaker supply chain.

It remains unclear exactly how Facebook envisions people using the assistant, but it could potentially be used on the company’s Portal video chat smart speakers, the Oculus headsets or other future projects, CNBC reported.

According to research firm eMarketer, Amazon’s Echo is expected to capture 63.3 percent of smart speaker users in 2019, while Google Home will account for 31 percent.

Reporting by Vibhuti Sharma in Bengaluru; Editing by James Emmanuel

Skype is Dropping Cortana Bot While Promoting Alexa

Cnet reported last week that Skype is discontinuing its integrated Cortana bot on April 30th, 2019, and is now promoting integration with Amazon’s Alexa voice assistant. OnMSFT reports that Skype users have been seeing promotional offers of 200 free minutes when they link their Skype and Amazon accounts. Microsoft and Amazon announced Skype calling on Alexa in November 2018; at the time, they were also offering 200 minutes of free calls once users’ Skype and Alexa accounts had been linked. Even though this promotional offer may not be new, it is significant that it is being promoted more heavily now that the Cortana bot will soon be leaving. Twitter user Florian Beaubois took screenshots of the Skype notifications to users:


A Microsoft spokesperson told Cnet that the Alexa promotion and Cortana’s discontinuation are unrelated. When the Cortana chat bot leaves at the end of April, Skype will still offer Cortana suggestions, which can be personalized to a user or chat when given permission. Cortana suggestions are like smart replies: they appear in a chat conversation as emoticons, GIFs, weather, movies, or restaurants. A user can select a suggestion to include it in a reply. The Microsoft spokesperson said in an email,

We are constantly evaluating and testing our products to ensure they offer the best and most productive experiences. We are discontinuing the Cortana bot within Skype, but Cortana suggestions are still available.

Microsoft announced plans to integrate Cortana into Skype as a bot at Microsoft Build 2016. The integration allowed users to order food, book trips, transcribe video messages and make calendar appointments through Cortana. In December 2016, Microsoft announced its own smart speaker, INVOKE, in collaboration with Harman Kardon, featuring the ability for users to make and receive Skype calls with Cortana.


At first glance, it seems pretty odd that Microsoft is promoting an Alexa integration with Skype rather than its in-house Cortana. However, the decision does make sense when considering Cortana’s enterprise focus. In late January 2019, Business Insider reported that Microsoft had admitted it can’t beat Amazon and Google in the voice assistant war. Comments from Microsoft’s CEO Satya Nadella to The Verge confirmed Microsoft’s plans to evolve Cortana into an app or service that works with other voice assistant platforms. So the move could just be part of Microsoft’s plans to scale back Cortana and focus on its enterprise use. Nadella said,

Cortana needs to be that skill for anybody who’s a Microsoft 365 subscriber.

Cortana already has deep integration with the Office suite, allowing Alexa users to leverage Cortana to schedule meetings, access calendars, and set reminders. Business Insider also reports that Cortana is the most popular voice assistant among enterprises in North America and Europe: nearly half (49%) of companies in these regions that use a voice assistant use Cortana. In addition, another 13% of enterprises in North America and Europe plan to implement Cortana in 2019, according to data from Business Insider Intelligence’s New Tech Survey.

Microsoft’s decision to retire the Cortana chat bot in Skype and promote Alexa integration instead could prove useful in the long run by allowing Microsoft to position itself as an open, cross-platform ecosystem. Voicebot’s Bret Kinsella wrote about Cortana’s enterprise focus in 2016 in VentureBeat, in 2017 on Voicebot.ai, and again in January 2019 on Voicebot.ai.

Microsoft Should Go All-in with Cortana for B2B Voice Applications

Amazon’s Alexa for Business

Amazon’s Alexa for Business Blueprints lets employees make custom voice apps

Use Alexa in your place of work? There’s cause for minor celebration: Today marks the launch of Alexa for Business Blueprints, a set of dozens of preconfigured templates for Amazon’s intelligent assistant that let customers create and publish private skills without having to write code. They’re currently available in the U.S., with additional territories presumably on the way.

As Amazon product marketing manager Ben Grossman explains in a blog post, Alexa for Business Blueprints stay within workplaces — they can’t be used on devices outside of an organization — and any employee can use them to submit voice app requests. How? Simply by signing in with an Amazon account, filling in the requests and responses fields, and supplying an Alexa for Business organization identifier (or ARN). It’s up to IT administrators to review and selectively enable apps for rooms (or the entire organization) as they come in.

A few of the available Blueprints address work-related questions like “What’s the guest Wi-Fi password?”, “What are the hours for IT?”, and “When does open enrollment start?” Others cover pertinent subjects like office layout (“Alexa, ask Team Guide, where’s the mailing center?”) and equipment setup (“How do I set up corporate email on my phone?”).

Coinciding with the rollout of Alexa for Business Blueprints, two new Skills Blueprints have been added to the Alexa Skills Blueprints website under a new Business category: a Business Q&A Blueprint for custom questions and answers, and an Onboard Guide Blueprint to handle new hire questions.

Amazon says that Alexa for Business users like Glidewell Dental already plan to use the Blueprints to keep staff updated with company information and events, and that Saint Louis University and Emerson University hope to drive student and faculty engagement and support curriculum. “[S]ome companies attempting to develop custom Alexa skills found they didn’t have the resources to build their own private skills, which can take several months to design and release to their organization,” Grossman wrote.

Alexa for Business made its debut two years ago at Amazon Web Services’ re:Invent conference. It allows enterprise users to schedule meeting rooms (with third-party services such as Cisco, Polycom, Zoom), share itinerary information (through Concur), check voicemail messages, and quickly see if meeting rooms are available, and it enables company admins to control users’ settings and capabilities through a web dashboard. Beyond traditional office environments, Amazon has pitched it as a way to manage Echo speakers in places like hotels and dorm rooms.

Today’s launch comes months after Amazon introduced Blueprints for personal Alexa skills, which are assigned to Alexa-enabled devices registered to a specific Amazon account. Like voice apps created with Alexa for Business Blueprints, they don’t appear among the tens of thousands of third-party skills in the Alexa Skills Store.

This discounted Google Home Hub + Google Home Mini bundle for $129 comes with a $50 Hulu gift card

Control smart home devices, rock out to your favorite tunes, and more.

Every so often, we’ll see a stellar deal on the Google Home Hub that makes picking one up super tempting, and it looks like such a deal is now live at BuyDig. While supplies last, you can pick up a Google Home Hub and Google Home Mini bundle with a $50 Hulu gift card for only $129 when you enter promo code ABN13 during checkout. Just the Google Home Hub alone would normally cost you $149, but with today’s deal you not only save $20 on its cost but also pick up the $49 Google Home Mini and a $50 Hulu gift card with it at no additional charge. In total, this package should cost closer to $250. You can choose between Chalk or Charcoal colored speakers with this offer, and shipping is free.

Google Home Hub is the only first-party Google home device with a screen, allowing you to see as well as hear information you request. It has a 7-inch touchscreen display, two far-field mics, and an ambient light sensor to ensure the display color and brightness fit in with its surroundings. The screen makes it more useful for visual tasks like checking your calendar, following along with recipes in the kitchen, watching YouTube, seeing the weather forecast, and more. We reviewed the Hub on release, praising its display, build quality and smart home management tools.


How did Google convince phone makers to add a Google Assistant button?

Voice assistants like Siri, Bixby and Google Assistant have been steadily integrated into our tech, eager to help us with our everyday tasks and answer our trivial questions. They can now be found on watches, TVs, speakers and, of course, their birthplace: smartphones.

It’s always been relatively easy to activate the voice assistant on your smartphone: with a shortcut on your home screen, a long press of the home button, or just by saying its call phrase. But it seems the companies spending millions developing these AI helpers feel that’s still not convenient enough.

Samsung went for a dedicated button for its voice assistant, Bixby, right from its introduction in 2017. The move was controversial, as users complained about accidental presses and felt that Samsung was forcing them to use Bixby, since the button wasn’t customizable. And while Samsung finally did allow one of the button’s functions to be remapped a few weeks ago, its convenience is still questionable.

It appears that Google was observing and taking notes about the pros and cons of the dedicated assistant button. And at the bottom of that list, it says: “We should have one for sure!” And now we do.

The rise of the Google Assistant button

Until now, a dedicated Google Assistant button was a novelty; soon, its presence will be almost as common as Android itself. About a month ago, Google announced that it’s partnering with several smartphone manufacturers to bring the Google Assistant button to more devices than ever.

The lineup is pretty strong: LG, Nokia, Xiaomi, Vivo and TCL (maker of Alcatel phones) have all agreed to add the button to their smartphones. LG (which actually had one on the G7) will have one on the G8 ThinQ, Nokia has announced the 4.2 and 3.2 that will both have it and Xiaomi’s latest flagship, the Mi 9, has one as well.

We know Samsung is missing from the party because of Bixby, but where’s Huawei? Well, rumor has it that China’s biggest phone maker is working on its own voice assistant to rival Google’s, which might be why it’s not part of the merry company.

Still, thanks to the strong support from these companies, Google estimates that about 100 million smartphones with a Google Assistant button will ship in 2019, a staggering number of devices.

So, how come so many phone makers are suddenly okay with adding a feature that’s purely benefiting another company? Well, let’s explore some theories!

Theory 1: Google is subsidizing phones that have the Google Assistant button

So far there’s been no information about the financial aspects of the deals between Google and the phone manufacturers. However, considering the questionable advantages of the feature, it’s not far-fetched to assume that Google is offering a little something to convince the companies to take that step.

That wouldn’t be unprecedented either: Google pays Apple billions of dollars to have its search engine as the default in Safari, and hundreds of millions to Mozilla for the same privilege in Firefox. So why not offer a small financial boost to partnered brands and in return get deeper integration of its voice assistant, which in turn drives more traffic to the search engine?

If we have to look for something the partnered manufacturers have in common, it’s that they’re all in a race to gain (or, in LG’s case, regain) market share. And a good way to do that is by offering more affordable devices. The LG G8 is rumored to cost around $750, which is a surprisingly low price for an LG flagship in 2019. Should we thank Google for that? Perhaps. We don’t have access to Google’s cost-benefit analysis for this project, but there’s definitely a monetary gain from having the dedicated hardware button. What part of that profit Google has agreed to share with the companies in question, we’ll likely never know. Maybe the answer is “no part”, in which case…

Theory 2: Google is using its position as the Android developer to convince manufacturers

There’s positive reinforcement, and there’s also negative reinforcement. If Google didn’t go with “we give you something and you give us something in return,” then maybe it used another tactic, for example: “if you don’t give us something, we might not give you something.” Again, we’re in the realm of speculation. But just as it’s not hard to imagine Google paying manufacturers to add the Assistant button, it’s also plausible that it implied that adding it would definitely be a good choice *wink wink*.

Surely Google has plenty of leverage when it comes to Android. While Android is an open-source operating system, meaning it’s free to use, the Google apps people expect to have on their phones (Google Maps, Gmail, Play Store) are all licensed from Google. If you’re a phone maker without a license, your devices are basically unusable, unless you have viable alternatives for those products (which is the case in China). There are numerous other benefits of being on Google’s good side. Maybe Google pointing them out followed by “we also think the Google Assistant button would be a great addition to any smartphone” was enough for the manufacturers to get the hint.

This scenario is less likely, as companies would have found a way to go public with Google’s shenanigans, which would of course harm its image. Still, it’s one of the few theories that explain the sudden love for the Google Assistant button. But of course, there’s also a third option!

Theory 3: We’re all wrong and dedicated assistant buttons are actually awesome

As unbelievable as that sounds, we must consider all the possibilities! Sure, we hear a lot of complaints about the Bixby button getting in the way, and things probably aren’t much different when it comes to the GA button as well. However, this might be one of those cases where the vocal minority is twisting the public perception about an issue. What if for every person that dislikes the feature there are ten that like it?

Of course, we don’t have the data, but Google must have spent a lot of time and resources researching and analyzing the concept before deciding to go through with mass integration. And when you think about it, if the GA button does find its way into the hands of 100 million people this year, statistically speaking, even accidental presses are bound to make some people start using Google Assistant more often. After all, while voice assistants have been a given for years to people who follow tech, many users are still unfamiliar with the concept and don’t even know they have one on their phones.

Moreover, having the voice assistant complete tasks for you does feel in a way like you’re living in the future. Maybe with wider adoption, people will stop feeling so awkward talking to their phones and soon we’ll see the button as a convenience rather than an annoyance.

The dedicated button gives the voice assistant a physical presence: it makes the assistant a part of the phone you can see and interact with, an integral part of your device. Now, with Google in the game as well, it’s also becoming a way to mark territory. Bixby says “This phone is a Samsung first and an Android phone second!” while GA says “This phone is all about Android!”

It will be interesting to see where this rivalry will take us and if Huawei, now the world’s second largest manufacturer, will enter the scene with a capable voice assistant of its own. I’m guessing if that happens, it will come with a hardware button right off the bat.

Apple Acquires Startup Laserlike To Boost Siri: Report

Apple has acquired a startup specializing in machine-learning technologies as part of the ramp up of its artificial intelligence group led by a former Google executive, according to a report.

The startup, Laserlike, was founded by three ex-Google engineers and will join the AI group headed by Google’s former AI chief, John Giannandrea, The Information reported Wednesday.

[Related: New Apple Smart Home Business Leader Jadallah Has Channel Cred]

Apple, which did not immediately respond to a request for comment, reportedly confirmed the acquisition to The Information.

The acquisition could give a boost to Apple’s efforts at developing its voice-controlled assistant Siri, which has lagged behind Amazon’s Alexa and the Google Assistant on AI capabilities despite having a significant head start.

Giannandrea was brought on in April 2018 to head up Apple’s machine learning and AI strategy, including Siri. He has since carried out a “significant overhaul” of the Siri team, The Information reported.

Founded in 2015, Mountain View, Calif.-based Laserlike had raised $24.1 million in funding, according to funding database Crunchbase.

The startup’s technology has focused on providing users with “high quality information and diverse perspectives on any topic from the entire web,” Laserlike said on its website.

The company has developed a “web scale” platform for content search, discovery and personalization using machine-learning technologies, the Laserlike website says.

YouTube Music comes to Google Home in many more countries (Europe, Japan, Canada)

As the story goes, Google isn’t particularly good at making its own services and apps work together. Take YouTube Music for example. It took months for the streaming service to be available on Android Auto and as an alarm provider in the Clock app, and it still isn’t integrated in Google Maps the way Spotify and Play Music are. But things are ever-so-slowly improving. You can (now) pick YouTube Music to be your music provider on Google Assistant and Home speakers in many, many more countries.

Previously, only users in the US, UK, Australia, and Mexico saw the option of YouTube Music in their Assistant settings. But almost all countries where both Google Home and YouTube Music are officially available have it now:

  • Canada
  • Denmark
  • France
  • Germany
  • Italy
  • Japan
  • Netherlands
  • Norway
  • Spain
  • Sweden

South Korea still only has access to YouTube Premium. And both India and Singapore don’t have YouTube Music yet, so it’s understandable the music service isn’t accessible on Google Home.

Full story: https://www.androidpolice.com/2019/03/08/youtube-music-comes-to-google-home-in-many-more-countries-europe-japan-canada/

Google Assistant introduces radio – listen to radio on Google Home

With Google Home, you can use your voice to listen to terrestrial or internet/satellite radio stations through Google Home itself, or through a TV or speaker with Chromecast built-in. Here are some ways to talk to your Google Assistant on Google Home when choosing and listening to terrestrial and internet/satellite radio.



Ask Google Assistant to translate your conversation with someone who doesn’t speak your language

Translate conversations with interpreter mode – in real time! You can ask the Google Assistant to translate your conversation with someone who doesn’t speak your language. For now, you must use English, French, German, Italian, Japanese, or Spanish to start using interpreter mode. After you’ve started interpreter mode, you can ask the Google Assistant to translate between more languages.

How to Translate a conversation

  1. Say “Ok Google.”
  2. Say a command, like:
    • Be my Italian interpreter.
    • Help me speak Spanish.
    • Interpret from Polish to Dutch.
    • Chinese interpreter.
    • Turn on interpreter mode.
  3. If you haven’t identified languages, choose which languages you want to use.
  4. When you hear the tone, start speaking in either language. You don’t have to alternate between languages for interpreter mode to work.

On a Smart Display, you’ll both see and hear the translated conversation.

To stop using interpreter mode, say a command like:

  • Stop.
  • Quit.
  • Exit.

On a Smart Display, you can also swipe from left to right to stop interpreter mode.

Devices you can use

  • All Google Home speakers
  • Some speakers with Google Assistant built-in
  • All Smart Displays

Languages you can translate between

You can ask the Google Assistant to translate into any of the following languages.

  • Czech
  • Danish
  • Dutch
  • English
  • Finnish
  • French
  • German
  • Greek
  • Hindi
  • Hungarian
  • Indonesian
  • Italian
  • Japanese
  • Korean
  • Mandarin
  • Polish
  • Portuguese
  • Romanian
  • Russian
  • Slovak
  • Spanish
  • Swedish
  • Thai
  • Turkish
  • Ukrainian
  • Vietnamese

Guardian news to start your day, adapted for Google Assistant

We have generated an audio news summary by blending human and synthetic voices

The smart speaker is reinvigorating the news summary format common in broadcast with quick headlines and context consumable in a compact package. We at the Lab are experimenting with synthesising such a bulletin, designed for Google Assistant and based on existing Guardian journalism and curation.

You can check it out on Google Home devices or through the Google Assistant on your smartphone by saying, “Hey Google, talk to the Guardian Briefing.”

Although we make in-depth podcasts, the Guardian does not produce audio appropriate for this format. We do, however, create visual, predominantly text-based packages in the form of newsletters and morning briefings. The Lab is attempting to create this well-understood audio summary format by blending human and synthetic voices, harnessing the power of text-to-speech technology on the Assistant platform.

Smart speakers create new habits around old formats

The news bulletin is almost as old as radio itself. However, research shows smart speakers are rejuvenating this format. According to a 2018 Voicebot.ai study, 63% of smart speaker owners use the device daily and 34% have multiple interactions per day, creating new habits.

When users incorporate news into their daily lives, they are often looking for specific lengths of content. More than half of smart speaker owners want the latest news on a regular basis, according to the Smart audio report by NPR and Edison Research, but many would prefer shorter formats.

Demand for short, up-to-date bulletins is highest at the start and end of the day. The Future of Voice report by Reuters suggests the majority of news usage happens in the mornings, when fresh routines are emerging, and last thing at night. Regular listeners of news updates say they like the brevity, the control and the focus.

Generating an audio bulletin

While traditional broadcasters have a range of products ready to plug into these slots, we had to think about how we might create a suitable package of content. There are services available to help adapt content for smart speakers, and of course it’s also possible to have someone record a regular update. While we think these are viable options, we were excited by the possibility of creating a new audio product for the Guardian through automation by combining rich audio and text-to-speech technology.

We began by looking at Morning Briefing content to leverage our journalism and curation, rather than simply grabbing the headlines. Through daily iteration, the team crafted a set of rules to structure that content by combining headlines with supporting text. This newly structured data was then inserted into an SSML template alongside rich audio and blended with music.
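As a rough sketch of what this pipeline might look like, the following Python snippet assembles an SSML document by pairing each headline with its supporting text and wrapping the result in a template that mixes recorded audio with synthetic speech. The Guardian’s actual templates and rules are not public, so the element structure, prosody values, and jingle URLs here are purely illustrative assumptions; only the SSML element names (`speak`, `audio`, `emphasis`, `break`, `prosody`) follow the standard.

```python
# Hypothetical sketch of the bulletin-generation step described above.
# Each story is a (headline, supporting_text) pair; the templates below
# are illustrative, not the Guardian's real SSML.

ITEM_TEMPLATE = (
    '<p><emphasis level="moderate">{headline}</emphasis>'
    '<break time="400ms"/>'
    '<prosody rate="95%">{supporting_text}</prosody></p>'
)

BULLETIN_TEMPLATE = (
    '<speak>'
    '<audio src="{intro_jingle}"/>'  # recorded human intro / music bed
    '{items}'                        # synthesized story items
    '<audio src="{outro_jingle}"/>'  # recorded outro
    '</speak>'
)

def build_bulletin(stories, intro_jingle, outro_jingle):
    """Render a list of (headline, supporting_text) pairs as one SSML string."""
    items = "".join(
        ITEM_TEMPLATE.format(headline=h, supporting_text=s) for h, s in stories
    )
    return BULLETIN_TEMPLATE.format(
        intro_jingle=intro_jingle, outro_jingle=outro_jingle, items=items
    )
```

Blending rich audio and text-to-speech this way is what lets a human-recorded jingle bookend a fully automated, synthesized body, which is the trade-off discussed below.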

Tuning the template was an exercise in sound design, as our editorial lead tweaked speed and prosody and experimented with variations of the Assistant’s voice, including the new WaveNet-based options.

While our research shows prolonged interaction with a synthetic voice is taxing and less pleasant than listening to a human voice, harmonising the two creates a more congenial aesthetic. The Guardian Briefing attempts to utilise the best of rich audio and text-to-speech. Relying on automation and the synthetic voice has drawbacks in terms of quality control and aesthetics. Yet we were impressed by the speed and flexibility of the text-to-speech approach.

Measuring success

As this product is about filling slots created by new habits, we will be using retention as our key metric. How often do users come back? Do they continue to follow the Briefing over time?

In the coming weeks, we will be examining this data. We will also try to improve the content of the Briefing by adding localisation options for the US and Australia as well as exploring visual expressions through multimodal design.

Give it a try and let us know what you think.

Find out more about the Voice Lab’s mission or get in touch at voicelab@theguardian.com.