How did Google convince phone makers to add a Google Assistant button?

Voice assistants like Siri, Bixby and Google Assistant have been steadily integrated into our tech, eager to help us with our everyday tasks and answer our trivial questions. They can now be found on watches, TVs, speakers and, of course, their birthplace: smartphones.

It’s always been relatively easy to activate the voice assistant on your smartphone: with a shortcut on your home screen, a long press of the home button or just by saying its wake phrase. But it seems that the companies spending millions developing these AI helpers feel that’s still not convenient enough.

Samsung went for a dedicated button for its voice assistant, Bixby, right from its introduction in 2017. The move was controversial: users complained about accidental presses and felt that Samsung was forcing them to use Bixby, since the button wasn’t customizable. And while Samsung finally did allow one of the button’s functions to be remapped a few weeks ago, its convenience is still questionable.

It appears that Google was observing and taking notes about the pros and cons of the dedicated assistant button. And at the bottom of that list, it says: “We should have one for sure!” And now we do.

The rise of the Google Assistant button

Until now, a dedicated Google Assistant button has been a novelty; soon its presence will be almost as common as Android itself. About a month ago, Google announced that it’s partnering with several smartphone manufacturers to bring the Google Assistant button to more devices than ever.

The lineup is pretty strong: LG, Nokia, Xiaomi, Vivo and TCL (maker of Alcatel phones) have all agreed to add the button to their smartphones. LG (which actually had one on the G7) will have one on the G8 ThinQ, Nokia has announced the 4.2 and 3.2, which will both have it, and Xiaomi’s latest flagship, the Mi 9, has one as well.

We know Samsung is missing from the party because of Bixby, but where’s Huawei? Well, rumor has it that China’s biggest phone maker is working on its own voice assistant to rival Google’s, which might be the reason why it’s not part of the merry company.

Still, thanks to the strong support coming from these companies, Google is estimating that about 100 million smartphones with a Google Assistant button will be shipped in 2019, a staggering number of devices.

So, how come so many phone makers are suddenly okay with adding a feature that purely benefits another company? Well, let’s explore some theories!

Theory 1: Google is subsidizing phones that have the Google Assistant button

So far there’s been no information regarding any financial aspects of the deals made between Google and the phone manufacturers. However, considering the questionable advantages of the feature, it’s not far-fetched to assume that Google is offering a little something to convince the companies to take that step.

That wouldn’t be unprecedented either: Google is paying Apple billions of dollars to have its search engine as the default in Safari, and hundreds of millions to Mozilla for the same privilege on Firefox. So why not offer a small financial boost to partnered brands and in return get a deeper integration of its voice assistant, which in turn leads to more traffic to the search engine?

If we look for something the manufacturers Google has partnered with have in common, it’s that they’re all in a race to gain (or, in LG’s case, regain) market share. And a good way to do that is by offering more affordable devices. The LG G8 is rumored to cost around $750, which is a surprisingly low price for an LG flagship in 2019. Should we thank Google for that? Perhaps. We don’t have access to Google’s cost-benefit analysis for this project, but there’s definitely a monetary gain from having the dedicated hardware button. What part of that profit Google has agreed to share with the companies in question, we’ll likely never know. Maybe the answer is “no part”, in which case…

Theory 2: Google is using its position as the Android developer to convince manufacturers

There’s positive reinforcement and there’s also negative reinforcement. If Google didn’t go with “we give you something and you give us something in return”, then maybe it decided to use another tactic, for example: “if you don’t give us something, we might not give you something”. Again, we’re in the realm of speculation. But just as it’s not hard to imagine Google paying manufacturers to add the Assistant button, it’s also plausible that it might have implied that adding it would definitely be a good choice *wink wink*.

Surely Google has plenty of leverage when it comes to Android. While Android is an open-source operating system, meaning it’s free to use, the Google apps people expect to have on their phones (Google Maps, Gmail, the Play Store) are all licensed from Google. If you’re a phone maker without a license, your devices are basically unusable, unless you have viable alternatives for those products (which is the case in China). There are numerous other benefits to being on Google’s good side. Maybe Google pointing them out, followed by “we also think the Google Assistant button would be a great addition to any smartphone”, was enough for the manufacturers to get the hint.

This scenario is less likely, as companies would have found a way to go public with Google’s shenanigans, which would of course harm its image. Still, it’s one of the few theories that explain the sudden love for the Google Assistant button. But of course, there’s also a third option!

Theory 3: We’re all wrong and dedicated assistant buttons are actually awesome

As unbelievable as that sounds, we must consider all the possibilities! Sure, we hear a lot of complaints about the Bixby button getting in the way, and things probably aren’t much different with the GA button. However, this might be one of those cases where a vocal minority is skewing public perception of an issue. What if for every person that dislikes the feature there are ten that like it?

Of course, we don’t have the data, but Google must have spent a lot of time and resources researching and analyzing the concept before it decided to go through with the mass integration. And when you think about it, if the GA button does find its way into the hands of 100 million people this year, statistically speaking, even accidental presses are bound to make some people start using Google Assistant more often. After all, while voice assistants have been a given for years to people who follow tech, many users are still unfamiliar with the concept and don’t even know they have one on their phones.

Moreover, having the voice assistant complete tasks for you does feel in a way like you’re living in the future. Maybe with wider adoption, people will stop feeling so awkward talking to their phones and soon we’ll see the button as a convenience rather than an annoyance.

The dedicated button gives the voice assistant a physical presence: it becomes a part of the phone you can see and interact with, an integral part of your device. Now, with Google in the game as well, it’s also becoming a way to mark your territory. Bixby says “This phone is a Samsung first and an Android phone second!” while GA says “This phone is all about Android!”

It will be interesting to see where this rivalry takes us and whether Huawei, now the world’s second-largest manufacturer, will enter the scene with a capable voice assistant of its own. I’m guessing that if that happens, it will come with a hardware button right off the bat.

Apple Acquires Startup Laserlike To Boost Siri: Report

Apple has acquired a startup specializing in machine-learning technologies as part of the ramp-up of its artificial intelligence group led by a former Google executive, according to a report.

The startup, Laserlike, was founded by three ex-Google engineers and will join the AI group headed by Google’s former AI chief, John Giannandrea, The Information reported Wednesday.

Apple, which did not immediately respond to a request for comment, reportedly confirmed the acquisition to The Information.

The acquisition could give a boost to Apple’s efforts at developing its voice-controlled assistant Siri, which has lagged behind Amazon’s Alexa and the Google Assistant on AI capabilities despite having had a significant head start.

Giannandrea was brought on in April 2018 to head up Apple’s machine learning and AI strategy, including Siri. He has since carried out a “significant overhaul” of the Siri team, The Information reported.

Founded in 2015, Mountain View, Calif.-based Laserlike had raised $24.1 million in funding, according to funding database Crunchbase.

The startup’s technology has focused on providing users with “high quality information and diverse perspectives on any topic from the entire web,” Laserlike said on its website.

The company has developed a “web scale” platform for content search, discovery and personalization using machine-learning technologies, the Laserlike website says.

YouTube Music comes to Google Home in many more countries (Europe, Japan, Canada)

As the story goes, Google isn’t particularly good at making its own services and apps work together. Take YouTube Music, for example. It took months for the streaming service to become available on Android Auto and as an alarm provider in the Clock app, and it still isn’t integrated into Google Maps the way Spotify and Play Music are. But things are ever-so-slowly improving. You can now pick YouTube Music as your music provider on Google Assistant and Home speakers in many, many more countries.

Previously, only users in the US, UK, Australia, and Mexico saw the option of YouTube Music in their Assistant settings. But almost all countries where both Google Home and YouTube Music are officially available have it now:

  • Canada
  • Denmark
  • France
  • Germany
  • Italy
  • Japan
  • Netherlands
  • Norway
  • Spain
  • Sweden

South Korea still only has access to YouTube Premium. And neither India nor Singapore has YouTube Music yet, so it’s understandable that the music service isn’t accessible on Google Home there.

Full story: https://www.androidpolice.com/2019/03/08/youtube-music-comes-to-google-home-in-many-more-countries-europe-japan-canada/

Google Assistant introduces radio – listen to radio on Google Home

With Google Home, you can use your voice to listen to terrestrial or internet/satellite radio stations, either through the Google Home speaker itself or on a TV or speaker with Chromecast built-in. The support page linked below lists ways to talk to your Google Assistant on Google Home when choosing and listening to terrestrial and internet/satellite radio.

https://support.google.com/googlehome/answer/7071793?utm_source=seasonal&utm_medium=email_crm&utm_campaign=GS102133&utm_term=radio_learnmore&utm_content=110196761_13

Ask Google Assistant to translate your conversation with someone who doesn’t speak your language

Translate conversations with interpreter mode – in real time! You can ask the Google Assistant to translate your conversation with someone who doesn’t speak your language. For now, you must use English, French, German, Italian, Japanese, or Spanish to start using interpreter mode. After you’ve started interpreter mode, you can ask the Google Assistant to translate between more languages.

How to translate a conversation

  1. Say “Ok Google.”
  2. Say a command, like:
    • Be my Italian interpreter.
    • Help me speak Spanish.
    • Interpret from Polish to Dutch.
    • Chinese interpreter.
    • Turn on interpreter mode.
  3. If you haven’t identified languages, choose which languages you want to use.
  4. When you hear the tone, start speaking in either language. You don’t have to alternate between languages for interpreter mode to work.

On a Smart Display, you’ll both see and hear the translated conversation.

To stop using interpreter mode, say a command like:

  • Stop.
  • Quit.
  • Exit.

On a Smart Display, you can also swipe from left to right to stop interpreter mode.

Devices you can use

  • All Google Home speakers
  • Some speakers with Google Assistant built-in
  • All Smart Displays

Languages you can translate between

You can ask the Google Assistant to translate into any of the following languages.

  • Czech
  • Danish
  • Dutch
  • English
  • Finnish
  • French
  • German
  • Greek
  • Hindi
  • Hungarian
  • Indonesian
  • Italian
  • Japanese
  • Korean
  • Mandarin
  • Polish
  • Portuguese
  • Romanian
  • Russian
  • Slovak
  • Spanish
  • Swedish
  • Thai
  • Turkish
  • Ukrainian
  • Vietnamese

Guardian news to start your day, adapted for Google Assistant

We have generated an audio news summary by blending human and synthetic voices

The smart speaker is reinvigorating the news summary format common in broadcast with quick headlines and context consumable in a compact package. We at the Lab are experimenting with synthesising such a bulletin, designed for Google Assistant and based on existing Guardian journalism and curation.

You can check it out on Google Home devices or through the Google Assistant on your smartphone by saying, “Hey Google, talk to the Guardian Briefing.”

Although we make in-depth podcasts, the Guardian does not produce anything in audio that fits this format. We do, however, create visual, predominantly text-based packages in the form of newsletters and morning briefings. The Lab is attempting to create this well-understood audio summary format by blending human and synthetic voices, harnessing the power of text-to-speech technology on the Assistant platform.

Smart speakers create new habits around old formats

The news bulletin is almost as old as radio itself. However, research shows smart speakers are rejuvenating this format. According to a 2018 Voicebot.ai study, 63% of smart speaker owners use the device daily and 34% have multiple interactions per day, creating new habits.

When users incorporate news into their daily lives, they are often looking for specific lengths of content. More than half of smart speaker owners want the latest news on a regular basis, according to The Smart Audio Report by NPR and Edison Research, but many would prefer shorter formats.

Demand for short, up-to-date bulletins is highest at the start and end of the day. The Future of Voice report by Reuters suggests the majority of news usage is in the mornings, where fresh routines are emerging, and last thing at night. Regular listeners of news updates say they like the brevity, the control and the focus.

Generating an audio bulletin

While traditional broadcasters have a range of products ready to plug into these slots, we had to think about how we might create a suitable package of content. There are services available to help adapt content for smart speakers, and of course it’s also possible to have someone record a regular update. While we think these are viable options, we were excited by the possibility of creating a new audio product for the Guardian through automation by combining rich audio and text-to-speech technology.

We began by looking at Morning Briefing content to leverage our journalism and curation, rather than simply grabbing the headlines. Through daily iteration, the team crafted a set of rules to structure that content by combining headlines with supporting text. This newly structured data was then inserted into an SSML template alongside rich audio and blended with music.

Tuning the template was an exercise in sound design, as our editorial lead tweaked the speed and prosody and experimented with variations of the Assistant’s voice, including the new WaveNet-based options.
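
For readers curious what the SSML step might look like in practice, here is a minimal sketch — not the Guardian’s actual pipeline — of rendering one briefing segment with Google’s Cloud Text-to-Speech Python client. The voice name, speaking rate, prosody values and file name are illustrative assumptions.

```python
# A minimal sketch of rendering one briefing segment via SSML and Cloud Text-to-Speech.
# Requires the google-cloud-texttospeech package and Google Cloud credentials.
from google.cloud import texttospeech

# Headline and supporting text slotted into an SSML template (structure is illustrative).
ssml = """
<speak>
  <p><s>Good morning, here is your briefing.</s></p>
  <break time="400ms"/>
  <p><prosody rate="95%">First headline here, followed by a sentence of supporting text.</prosody></p>
</speak>
"""

client = texttospeech.TextToSpeechClient()
response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(ssml=ssml),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-GB",
        name="en-GB-Wavenet-B",  # one of the WaveNet voices; the exact choice is an assumption
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3,
        speaking_rate=0.97,  # a slight global slow-down, in the spirit of the tuning described
    ),
)

# The synthesized segment can then be mixed with music and other rich audio downstream.
with open("briefing_segment.mp3", "wb") as f:
    f.write(response.audio_content)
```

In production, the same template would presumably be filled each day from the structured Morning Briefing data rather than a hard-coded string.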

While our research shows prolonged interactions with a synthetic voice are taxing and less pleasant than listening to a human voice, harmonising the two creates a more congenial aesthetic. The Guardian Briefing attempts to utilise the best of rich audio and text-to-speech. Relying on automation and the synthetic voice has drawbacks in terms of quality control and aesthetics. Yet we were impressed by the speed and flexibility of the text-to-speech approach.

Measuring success

As this product is about filling slots created by new habits, we will be using retention as our key metric. How often do users come back? Do they continue to follow the Briefing over time?

In the coming weeks, we will be examining this data. We will also try to improve the content of the Briefing by adding localisation options for the US and Australia as well as exploring visual expressions through multimodal design.

Give it a try and let us know what you think.

Find out more about the Voice Lab’s mission or get in touch at voicelab@theguardian.com.

3 Ways AI is Changing Real Estate

Is the real estate industry ready to embrace AI?

It used to be that game-changing innovations would come once per lifetime at best. For example, over 500 years passed between Gutenberg’s invention of the printing press and the first digital printer. There was a time when generation after generation was effectively dealing with the same tools and technologies as the generation before them.

But times have changed, and now our world is disrupted and rapidly reinvented on an almost daily basis. Changes that used to occur over several generations can now happen in a decade or less. This is true across the globe and in every industry. No one is safe. And if the rapid rise of social networking has taught us anything, it’s that the real estate industry is no different.

One of the biggest new technologies of the 21st century is artificial intelligence, the process by which computers are imbued with the ability to “think” like an intelligent being. To the uninitiated, this might sound a little too reminiscent of bad sci-fi movies, but the truth is that AI can “look” like any other piece of software. In fact, if someone built a robot, programmed it to use artificial intelligence and then sent it after you, you could easily outwit it by climbing a flight of stairs. But if they pitted it against the world’s best Go player, they’d be in trouble.

Artificial intelligence and real estate might not seem like an obvious fit at first, but think about it. If a piece of AI software can beat some of the most intelligent human minds on the planet, it could pose a serious threat to the average realtor.

If you work in real estate, then, you need to be aware of AI, even if only as a potential threat. The good news is that AI is actually at its most effective when it’s working alongside humans rather than replacing them, and the most successful real estate companies are likely to be the ones that accept this, embrace AI and use it to revolutionize the way they do business. Here’s why.

AI can increase the relevance of recommendations

When it comes to the real estate industry, the best way to secure a sale is to offer clients the property that’s perfect for them. Getting to know your clients will still require talented salespeople who are able to talk to them, interpret their pain points and find out exactly what they’re looking for. They’ll also need to be able to convert what they learn into data that an algorithm can understand.

The AI will then be able to take over by personalizing every customer interaction, bringing all of your marketing activity together in such a way that it can aggregate the data and figure out what will work best at every touchpoint. For example, it can tailor its messaging and the imagery it uses based on what’s worked well for other, similar customers.

This will be particularly important when it comes to searching and browsing websites. If an AI-based algorithm can help to surface the perfect properties for each different viewer, you’ll quickly become the go-to realtor in your area and beyond. It’s all about finding smart ways to help AI to help you to do what you’re already doing – but at a much more effective level.
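
As a toy illustration of what “surfacing the perfect properties” can mean in practice, here is a minimal, hypothetical sketch that matches a client profile to the most similar listings using scikit-learn’s nearest-neighbour search; the features and numbers are invented.

```python
# A hypothetical, minimal property-matching sketch using scikit-learn.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Made-up listing features: price ($), bedrooms, distance to city centre (km), floor area (sq m)
listings = np.array([
    [450_000, 3, 5.0, 110],
    [320_000, 2, 12.0, 75],
    [780_000, 4, 2.5, 160],
    [510_000, 3, 6.5, 120],
])

# What the salesperson has learned about the client, translated into the same feature space
client_profile = np.array([[500_000, 3, 6.0, 115]])

# Scale features so price doesn't dominate the distance metric, then find the closest listings
scaler = StandardScaler().fit(listings)
matcher = NearestNeighbors(n_neighbors=2).fit(scaler.transform(listings))
_, indices = matcher.kneighbors(scaler.transform(client_profile))

print("Listings to surface first:", indices[0])
```

A real system would add far more signals — browsing history, saved searches, responses to past messaging — and a learned ranking model, but the principle of putting clients and listings into the same feature space is the same.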

AI can help you to better sell to people

Another of the advantages of AI is that it never sleeps. That means you can use AI-based bots to provide 24/7 coverage to customers who visit your website or your social networking profiles. They can “chat” with customers on your behalf and help you make money even when you’re asleep and the office is closed.

Chatbots have been gaining in popularity for several years, and they’re finally at the point at which they can (allegedly) pass the Turing Test. In other words, the best chatbots are now effectively indistinguishable from real people, and they can be genuinely useful in the way we interact with realtors.

As far back as 2016, Inman conducted an experiment called Broker vs. Bot. They essentially asked realtors to compete with an AI bot called Find More Genius to see who could make the best recommendations. Perhaps inevitably, the real estate journalist who was acting as the buyer picked out the properties that were suggested by the AI bot.

The good news is that real estate agents will still be needed to draft contracts, answer telephones and show people around properties. It’s just that they can use AI to automate much of the work and to free up their time to spend it on more profitable tasks.

Long-term relationship building

People don’t just use a real estate company at one specific time in their life. Circumstances change, and people need to sell their properties and move on, or even decide to purchase second and third properties as investments or to rent them out. But despite that, realtors have historically focused on the here and now without thinking about the future; nurturing relationships for later can seem like an inefficient use of time when there are sales to chase today.

Now though, thanks to AI, they can continue to develop these long-term relationships through AI-based customer relationship management (CRM) systems. In fact, the technology is now so good that AI can predict whether people are likely to default on their loan or to fail a credit check. And this, of course, helps to cut down on delays and even allows real estate firms to function more profitably.
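
To make the default-prediction claim a little more concrete, here is a hedged sketch of the kind of model such a CRM might run behind the scenes: a simple classifier trained on historical outcomes. The features, synthetic data and numbers are invented for illustration only.

```python
# A hedged sketch of default-risk scoring with a simple classifier on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Invented applicant features: income (k$), debt-to-income ratio, years in current job
X = np.column_stack([
    rng.normal(60, 15, n),
    rng.uniform(0.05, 0.6, n),
    rng.integers(0, 20, n),
])

# Toy ground truth: higher debt-to-income and lower income make default more likely
p_default = 1 / (1 + np.exp(-(4 * X[:, 1] - 0.03 * X[:, 0])))
y = rng.random(n) < p_default

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("Held-out accuracy:", round(model.score(X_test, y_test), 3))
print("P(default) for a new applicant:", round(model.predict_proba([[45, 0.5, 1]])[0, 1], 3))
```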

AI can even be used in the field of property management to monitor vital metrics and to predict when maintenance will be required or when errors might occur. It can even be used to monitor specific geographic areas and long-term trends in crime rates, property prices and more.

So it’s clear, then, that AI and the field of real estate are a match made in heaven, and we shouldn’t be surprised as they continue to grow closer and closer together in the months and years to come. The real question is whether you’re going to be able to ride the wave and take advantage of new technologies or whether you’re going to get left behind. It’s up to you.

Atlantic Broadband Offers Amazon Alexa Integration

QUINCY, Mass. — Atlantic Broadband, the nation’s ninth largest cable operator, today announced that Amazon Alexa voice control functionality is now available through its TiVo-powered video platform.

The new enhancement allows Atlantic Broadband TiVo customers with an Amazon Alexa voice assistant device to issue hands-free voice commands from anywhere in a room (“far-field”) without the need for a remote control. This functionality allows customers to easily pause, rewind and fast-forward programming. They can also change the channel and open streaming apps like Netflix through Amazon Alexa on their TV with simple voice commands.

Alexa, the top-selling virtual assistant in the United States, uses an intelligent natural-language speech recognition system to deliver commands when paired with the TiVo unit.

“This new Amazon Alexa integration with our TiVo-powered video platform combines ease of use with powerful functionality to dramatically elevate the in-home TV entertainment experience for our customers,” said Heather McCallion, Vice President of Products and Programming for Atlantic Broadband. “Today we’re able to deliver even greater value to our customers through innovative virtual assistant technology.”

Atlantic Broadband was among the first multi-system cable operators in the U.S. to launch TiVo’s advanced gateway platform in 2013. The following year, it was among the first video providers in the nation to fully integrate Netflix into its video platform. Last summer, Atlantic Broadband introduced a new user interface for the TiVo platform featuring intuitive navigation and enhanced functionality, hyper-personalized viewing recommendations and an easy-to-use remote with voice control.

How to easily get Google Assistant on your Bixby Button

We were very happy when we heard Samsung had finally relented and allowed users to remap the Bixby button to another app.

Unfortunately, it seems Samsung still wanted to restrict competition to its Bixby voice assistant, and would not allow users to set the button to Cortana or Google Assistant, for example.

It did not take long to work around this restriction, however: XDA-Developers released a small stub app called Bixby Button Assistant Remapper, which can be selected as the button’s target application and will then automatically launch your chosen voice assistant (after you select the default once).

The app works with Cortana and Google Assistant, but unfortunately not Alexa.

Read the full instructions and download the APK from XDA-Developers here.

Artificial Intelligence: No Longer a Business Want but a Need Instead

Technology and innovation for the enterprise are developing at such an extraordinary rate that it’s become challenging for businesses to keep up. With each new advancement, resources are needed to research, invest and integrate with existing systems and infrastructure. The biggest advancements and overall game changers are artificial intelligence (AI) and machine learning (ML), which have the capability to transform businesses and industries so greatly that companies can no longer afford to treat them as merely aspirational.

As we look further ahead to late 2019, 2020 and 2021, AI/ML will be essential for businesses to stay competitive and will become the standard process companies look to for making intelligent decisions based on quantifiable data rather than traditional industry knowledge and executive instinct.

The future of AI: quantifiable data as the new gold standard

Most companies are already utilizing AI/ML in some form to generate revenue, but now, given advanced compute power and more robust machine learning platforms, they will need to go broader and deeper. Accelerating operational efficiency, as well as providing B2C and B2B dialogues that are highly targeted and meaningful, will be required just to stay on par with the competition. To create the competitive advantage needed to succeed, companies will need to prioritize data-driven models and get creative with the intended outcomes.

Beginning in late 2019, we will see exponential increases in the applied use of AI across every business discipline as quality data sets become more uniform and accessible. Once a process is established for identifying use cases and moving from POV to eventual broad adoption, it will become repeatable yet modifiable based on need. As the outcomes of AI/ML proliferate across disciplines within an organization, we will see the dependency on those data-based decisions increase and, with that, a new way of conducting business will emerge.

Yet adoption will be slow, given that AI/ML has a massive customization problem. Each problem requires specific data sets and its own algorithm based on weighted features. With that, it becomes difficult to replicate without modification for each use case. Even though machines will process most of the work, the preparation needed to get to workable algorithms will remain quite manual, requiring expensive skill sets on the part of data scientists and architects, and many hours of work across multiple organizations, considering data will come from multiple sources.

Eventually, once formulas are established and implemented, the rate of positive change – either internally or externally with customers – will prove to be a huge competitive advantage for businesses.

As industry-specific use cases become known and distributed, they will become part of the business ecosystem as well. Previously, we relied on intuitions and experiences to read the market and understand what customers want, but it wasn’t strong, scientific, math-based decision-making. That old model has evolved.

Going forward, C-suite decision makers will start requiring that there is quantifiable data behind any major decision. It’s going to become part of how we engage with our customer base to achieve a much more targeted approach. A classic example of applied AI/ML is the consumer-facing company Zillow. By creating an algorithm that weighs multiple features of residential home sales, Zillow is inching closer to becoming the standard pricing platform, eliminating the estimates typically driven by the experience of the real estate agent. This is a good example of industry knowledge being augmented, and somewhat replaced, by intelligent data.
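
As a rough illustration of that style of pricing model — and emphatically not Zillow’s actual algorithm — here is a sketch that fits a regressor to hypothetical past sales, reports how it weighs the features, and prices a new listing.

```python
# A rough, hypothetical sketch of a Zillow-style price model on synthetic sales data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1_000

# Invented features of past sales: floor area (sq ft), bedrooms, lot size (sq ft), year built
floor_area = rng.uniform(800, 4_000, n)
bedrooms = rng.integers(1, 6, n)
lot_size = rng.uniform(2_000, 20_000, n)
year_built = rng.integers(1950, 2020, n)
X = np.column_stack([floor_area, bedrooms, lot_size, year_built])

# Toy pricing rule plus noise, standing in for recorded sale prices
price = (150 * floor_area + 20_000 * bedrooms + 2 * lot_size
         + 500 * (year_built - 1950) + rng.normal(0, 25_000, n))

X_train, X_test, y_train, y_test = train_test_split(X, price, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

print("R^2 on held-out sales:", round(model.score(X_test, y_test), 3))
print("Estimate for a new listing:", int(model.predict([[2_200, 3, 8_000, 1995]])[0]))
print("Learned feature importances:", model.feature_importances_.round(3))
```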

Make sure to avoid “paralysis by analysis”

The availability of data and quantifiable results from AI can have its drawbacks. Once AI becomes more mainstream in the months and years ahead, and businesses adjust to these processes and see stronger outcomes as a result, they will require that more decisions be backed up by quantifiable data. This slows us down if we rely too much on numbers and lose our ability to react based on experience and intuition. We’re seeing this all the time in our daily business environment. The fear of making a wrong decision subsides when there are more calculations to justify and predict better outcomes.

To overcome the notion of “paralysis by analysis,” businesses will need to define degrees of priority, engagement scope and overall potential ROI for each project. These will ultimately drive activity and management around the process. In late-2019 and beyond, striking the right balance between leveraging quantifiable data and avoiding paralysis by analysis will be critical for organizations competing in the data space. This, in tandem with AI’s significant growth, will help businesses establish a more critical edge.

Full article: https://insidebigdata.com/2019/03/02/artificial-intelligence-no-longer-a-business-want-but-a-need-instead/