Yesterday, Microsoft and Amazon announced a public preview of the integration of their intelligent digital assistants, Cortana and Alexa, for US users. The integration lets each digital assistant summon the other, giving users access to additional apps and services on their Windows 10 PCs and Harman Kardon Invoke speakers.
Why did Microsoft and Amazon integrate Cortana and Alexa?
“I want them to have access to as many of those A.I.s as possible,” said Jeff Bezos in an interview with The New York Times, laying out his vision of users talking to AIs the way they talk to friends, asking them for recommendations on a good restaurant or a popular hiking spot nearby, and so on.
He further stated, “The world is big and so multifaceted. There are going to be multiple successful intelligent agents, each with access to different sets of data and with different specialized skill areas. Together, their strengths will complement each other and provide customers with a richer and even more helpful experience.”
Satya Nadella, CEO of Microsoft, added, “Bringing Cortana’s knowledge, Office 365 integration, commitments, and reminders to Alexa is a great step toward that goal.”
Cortana users now have another way of making their lives easier with a better shopping experience. For instance, if you’re at work and remember you need to pick up soft drinks for a dinner party that evening, you can simply ask Alexa from your Windows 10 PC, iPhone, or Android phone to order them using the preferred payment method for your Amazon account.
“Alexa, open Cortana”
“Hey Cortana, open Alexa”
To try out this exciting update for Alexa on Cortana, or vice versa, simply say “Hey Cortana, open Alexa” on a Windows 10 PC, or “Alexa, open Cortana” on an Echo device.
As explained by Amazon in one of its recent posts, “The goal is to have two integrated digital assistants who can carry out tasks across different dimensions of daily life — at home or work, and on whatever device is most convenient. Currently, Cortana and Alexa can each be enabled as a skill on the other.”
Microsoft Office 365 users can ask Cortana to summon Alexa on a PC at work, then use Alexa to order groceries or adjust the thermostat before heading home for the day. Before heading to work, they could enlist Cortana through an Echo device to preview the day’s calendar, add an item to a to-do list, or check for new emails while making breakfast in the kitchen.
As part of this public preview, users can freely offer feedback on how both communities can improve the Alexa+Cortana experience.
The feedback will cover what users like, what they don’t, and which features they use the most. With customer feedback, the experience will keep getting better and more precise as more people use it and as updates roll out to the underlying algorithms.
“Engineers will use feedback from the public preview to deepen the collaboration between Cortana and Alexa”, stated Jennifer Langston in Amazon’s official post.
The basic idea behind the new Google-powered Smart Displays coming this summer is simple: take a Google Home smart speaker and put a screen on it, just like Amazon’s Echo Show. Really, if that’s all you take away from this review, you’ve got the basics.
The Lenovo Smart Display is the first of these new devices on the market. LG, Sony, and JBL have also signed on to make them. Lenovo’s version goes on sale July 27th, priced at $199 for a model with an 8-inch screen and $249 for the model I tested, which has a 10-inch display.
There’s something more going on here than just a screen for the Google Assistant, though. These Smart Displays run Android Things, a newish operating system based on Android and designed for Internet of Things devices. That means Google has a new canvas for its virtual Assistant to work with, unencumbered with the need to support any of the cruft that would come along with running full Android or Chrome.
With that blank slate comes an opportunity for Google to make exactly the thing it wants to make. And the result is something I wasn’t really expecting: a Google appliance.
Voice assistants are the next big thing. Some say they’re the next mobile, though I don’t know if that’s accurate or an understatement. All the major platform companies have one, and startups building them appear ever faster, making it hard to keep track of everything. The point is, they are going to be everywhere and are going to dominate the way we interact with our computers. Yet I hear many questioning whether these assistants are even viable from a business perspective. The argument goes that by moving people away from screens, assistants may be diminishing traditional screen-based revenue streams. How is Google going to sell ads alongside its search results if the user is taken directly to the information they desire without ever looking at a list of results?
Content providers may indeed have a harder time turning their work into paychecks. If you’re running a blog or publication, your main business is placing ads next to your reporting. When more people move away from screens and have their news read to them by an AI instead, fewer people will see your ads. Whether people will actually do that in significant numbers remains to be seen. For the companies operating the voice assistants, however, these assistants will become a gold mine. Even better, their value proposition for the customer is precisely what makes them valuable for the operating businesses.
All it takes to cash in is a simple two step plan:
THE PATH OF GREATEST CONVENIENCE
The first part of the story is about getting to market dominance, or at least gaining a significant share of the market, and growing the overall market volume at the same time.
Essentially, companies are trying to get as many people as possible to use their system, and to get those people to use it as much as possible. That’s why, at the moment, all the weight is behind making these voice assistants useful and their interactions feel natural. We’re supposed to get used to talking to robots. The usefulness part of this puzzle is about handling relevant key use cases on the one hand, while supporting an extremely broad range of tasks on the other. As with most innovations, a few key functions are what people really want and what keeps them coming back. Still, an assistant has to be universally useful and support the user with whatever she wants to accomplish. This is especially important for voice. Without any visual cues for the available functionality, all the user is left with is trial and error. Every command the system can’t understand or act upon disappoints. Get a command wrong one too many times and you’ll be frustrated enough to stop using it.
So as long as no one entity creates an assistant that handles everything, there is demand, and possible room for coexistence, for many T-shaped ones. The crucial part, however, is acknowledging that coexistence: knowing strengths and weaknesses and facilitating actions and responses between these systems. Imagine an assistant that is great at smart-home coordination and control but has no exceptional skills in most other departments. By itself, it seems not too convenient, and you might instead turn to one that does pretty well generally but is just a bit weaker in smart-home tasks. However, if that first one could also mediate between different assistants, it could be a whole different story. It could call Google Assistant for knowledge questions, Amazon’s Alexa for shopping tasks, and so on. Now that would be pretty convenient! As soon as an assistant handles core functions well enough, and delights instead of disappoints in most other, more general requests, people might actually use it to an extent and volume that makes it interesting for companies.
The part about conversations feeling natural is just as important. Two things are needed for conversations with machines to feel natural: speech synthesis and conversation flow.
Speech synthesis describes a computer producing actual “spoken” sounds from written words and data. This begins with arranging prerecorded syllables one by one and becomes ever more complicated when incorporating important traits of our languages, such as intonation and flow. Technology has gotten really good at this, as you can hear in currently available voice assistants. While in most cases you can still easily tell that you’re talking to a robot, speech synthesis has reached a state of being good enough to have a conversation. You can clearly understand what the machine is trying to say without the sound of it becoming a distraction.
The next big challenges in the field are about making sounds even more human. Robots are good at communicating facts, but conversation is about so much more than plain facts. We use speech to direct attention, convey emotions and carry more meaning than the individual words. Getting our robots to follow conversational conventions by producing and using all these stylistic measures correctly and effectively is the current area of focus in the field. And one where our robot friends still have a lot to learn.
Conversation flow describes, in simple terms, how well the conversation is going overall. For proper conversation flow it takes both parties to be benevolently engaged, actively listening and understanding. Let’s break that down:
benevolently engaged: wanting the best outcome for the other party and taking action towards this goal
listening: being focused and hearing what the other party is saying
understanding: recognizing and comprehending both literal and contextual/tonal information
Listening here translates to microphone technology and speech-to-text transcription. While there is still a lot of room for improvement, at the basic level it’s solved. The other two are where it gets complicated. When it comes to voice assistants, that means that even if they can’t do everything you want them to, they at the very least have to understand what you mean and try their best to help you reach your goal some other way. This is where assistants go wrong at the moment. Stray a tad too far from the predefined functional path and you might as well be talking gibberish. But even when they understand what you’re saying, keeping a healthy exchange alive, offering assistance and information where you didn’t expect it, and asking relevant questions to grasp context is still a huge problem that needs to be solved.
Once a voice interface reaches a certain threshold of both usefulness and natural-feeling conversation, it has the potential to reach an incredible number of users, and to reach them on a more personal level than is possible now. In conversations with robots, even simple ones, humans tend to assign human traits to the machine. We read meaning, feelings, and intentions into words where there are none. This is called the ELIZA effect, and it is scientifically well established; this article by Chatbots Magazine explains it in simple terms. If our assistant acts nicely and reacts benevolently to our requests, we can’t help but trust that it cares about our best interests. Knowing that it’s a machine, and that these two facts contradict each other, surprisingly doesn’t diminish our trust. And here’s the fun part: combining usefulness with trust leads to heavy use and, more importantly, to people opening up and giving more information about themselves and their wants.
At its AWS re:Invent conference last week, Amazon announced two new monetization updates for Alexa skill developers: in-skill purchases and the ability to make payments via Amazon Pay.
In-skill purchases will enable developers to sell premium content or digital subscriptions within their skills, and the extension of Amazon Pay will allow third-party developers to accept Amazon Pay for in-skill purchases.
The new tools will allow developers to generate revenue from voice apps, and ultimately pay off for Amazon. Monetization has been challenging for voice app developers across platforms, but Amazon has been making a concerted effort to enable developers to earn revenue, most notably through voice app subscriptions. While this model has helped Amazon emerge as an early leader with the most voice apps of any platform, the current solution is limited in scope. Implementing new monetization tools for Alexa skills bodes well for Amazon’s long-term success in the nascent voice assistant market:
It will likely encourage developers to focus on creating high-quality skills. The opportunity to make money via in-skill purchases will incentivize developers to improve the in-skill experience in order to retain users and increase usage. That’s because developers will need to add value to their skills to convince consumers to sign up for premium content.
It sets the foundation for mainstream adoption of Alexa skills. As developers make their skills more engaging in order to drive in-skill revenue, consumers may find more value in Alexa over rival voice assistants, allowing Amazon to gain a stronger footing in the market. While it’s likely that Amazon’s competitors will follow suit in improving monetization efforts, Amazon will see a first-mover advantage.
Although Amazon is focusing on monetization solutions, the biggest problem Alexa skill developers face is user acquisition. One of the main issues with Alexa is discoverability of skills; users simply don’t know how to use Alexa beyond making simple commands, like asking the voice assistant to play music or read a weather update. For example, while Alexa has access to more than 25,000 skills, about 53% of consumers use only one to three of them, while 14% of consumers haven’t even enabled a third-party skill, according to Dashbot. To drive greater adoption of Alexa, Amazon will need to provide more visibility for third-party skills from businesses and developers.
Smart speakers are the latest device category poised to take a chunk of our increasingly digital lives. These devices are made primarily for the home and execute a user’s voice commands via an integrated digital assistant. These digital assistants can play music, answer questions, and control other devices within a user’s home, among other things.
The central question for this new product category is not when they will take off, but which devices will rise to the top. To answer this question, BI Intelligence, Business Insider’s premium research service, surveyed our leading-edge consumer panel, gathering exclusive data on Amazon’s recently released Echo Show and Echo Look, as well as Apple’s HomePod.
Peter Newman, research analyst for BI Intelligence, has put together a Smart Speaker report that analyzes the market potential of the Echo Look, Echo Show, and HomePod. Using exclusive survey data, this report evaluates each device’s potential for adoption based on four criteria: awareness, excitement, usefulness, and purchase intent. Finally, the report draws some inferences from our data about the direction the smart speaker market could take from here.
Here are some of the key takeaways:
Amazon’s new Echo Show is the big winner — it has mass-market appeal and looks like it will take off. The combination of usefulness and excitement will drive consumers to buy the Echo Show. The Echo Look, though, seems like it will struggle to attract that same level of interest.
Apple’s HomePod looks likely to find a place in the smart speaker market but won’t dominate its space like the iPhone or iPad did.
The smart speaker market will evolve rapidly in the next few years, with more devices featuring screens, a variety of more focused products emerging, and eventually, the voice assistant moving beyond the smart speaker.
In full, the report:
Showcases exclusive survey data on initial consumer reactions to the Echo Look, Echo Show, and HomePod.
Highlights the aims and strategies of major players in the smart speaker market.
Provides analysis on the direction this nascent market will take and the opportunity for companies considering a move into the space.
Today, people spend too much of their day on tedious tasks at work, like managing their calendars, dialing in to meetings, or searching for information. Amazon Alexa can help solve this problem by acting as an intelligent assistant at work. Alexa lets people use their voice to interact with technology, so they can spontaneously ask questions in a way that feels natural and take care of these tasks just by asking. Alexa can help people stay organized and focused on the things that matter, whether they are working in their office or at home. Alexa can simplify conference rooms, allowing meeting attendees to start meetings and control the equipment in the room by simply using their voice. Alexa can also do things around the workplace, like providing directions to a conference room, notifying IT about a broken printer, or placing an order for office supplies.
Alexa helps you at your desk
Alexa lets you be more productive throughout your day and stay focused on important tasks. Alexa can help you manage your schedule, keep track of your to-do list, and set reminders. Alexa can automatically dial into your conference calls and make phone calls for you. Alexa can help quickly find information for you, like the latest sales data, or the inventory levels in your warehouse.
Alexa simplifies your conference rooms
Alexa lets you start meetings and control your conference room settings using your voice. With Alexa, you don’t need to use remote controls, look up conference call information, and manually dial in to meetings – you can simply say “Alexa, start my meeting”, and Alexa gets your meeting started. Alexa-enabled devices can act as audio conferencing devices in smaller conference rooms, or control equipment in larger rooms.
Alexa helps you around the workplace
Alexa helps your workplace run more efficiently. By building your own custom Alexa Skills, you can easily voice-enable your workplace, and let Alexa help with common everyday tasks. Using your custom Alexa Skills, Alexa can provide directions, find an open meeting room, order new supplies, report building problems, or notify IT of an equipment issue. Alexa can also provide important information, like inventory levels, and help with on-the-job training.
Alexa adds voice to your products and services
Alexa lets you add voice to your products and services so you can provide rich, personalized voice experiences for your customers. Alexa can help hotel guests feel comfortable, play their favorite music, and even order room service. Alexa can provide customers with valuable information about your product, and provide support when they run into problems. With Alexa, you can redefine the way your customers interact with your products and services.
Alexa for Business makes it easy for you to use Alexa in your organization. Alexa for Business gives you the tools you need to manage Alexa-enabled devices, enroll your users, and assign skills at scale. You can build your own custom voice skills using the Alexa Skills Kit and the Alexa for Business APIs, and you can make these available as private skills for your organization.
With Alexa for Business, you can provide shared Alexa devices for anyone to use in common areas around your workplace, and personal Alexa devices for your employees to use. Shared devices allow Alexa to simplify conference rooms, and help around the office, and anyone can access them. Personal devices let Alexa help users be more productive throughout their day, at work or at home.
EASILY PROVISION AND MANAGE ALEXA DEVICES
Alexa for Business allows you to provision and manage Alexa devices in your organization from a centralized console. With Alexa for Business, you can easily provision multiple Alexa devices at the same time, and automatically connect them to your Alexa for Business account. You can specify device locations, enable a set of skills that can be used, and prevent users from tampering with them. This saves time because you don’t need to manage these devices individually.
CONFIGURE CONFERENCE ROOMS
Alexa for Business makes it easy for you to configure Alexa to control your conference rooms. Alexa for Business lets you specify the type of conferencing equipment you use and your preferred meeting applications, which allows Alexa to start most meetings, on most devices, in any room. You can use Alexa devices as audio conferencing devices in small conference rooms, or to control equipment in larger rooms. Alexa for Business is an open service, and the Alexa for Business APIs allow you to build skills so that Alexa can work with additional equipment or perform specific tasks in your conference rooms.
Alexa for Business allows you to invite your end users to enroll their personal Alexa account with your Alexa for Business account. This lets them continue to use the Alexa features and skills they’ve already enabled in their personal Alexa account, as well as the work skills you provide, on any of their devices, at work or at home. Alexa for Business gives you the ability to make work skills available and provide access to your corporate calendar system so that they can use Alexa to manage their calendar.
CREATE CUSTOM SKILLS
Alexa for Business lets you build your own private custom skills for your workplace, your employees, or your customers to use. You can make these skills available only to your shared Alexa devices, and your enrolled users. Alexa for Business provides an additional set of APIs that provide information about device location, which lets you add context to your skills. For example, you could build a skill that lets a user report a printer problem to IT, and the skill could use the device location so that IT knows which printer is broken. Building custom skills is easy, and the Alexa Skills Kit provides tools, documentation, and code samples to help you get started.
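The printer-reporting idea above can be sketched as a skill backend. This is a minimal, hypothetical sketch, not Amazon's implementation: the device-to-room table and the `notify_it` ticket function are invented for illustration, while the `deviceId` field and the response envelope follow the general shape of the Alexa request/response JSON.

```python
# Hypothetical sketch of a private "report a printer problem" skill handler.
# DEVICE_LOCATIONS and notify_it are assumptions; a real skill would resolve
# device location through the Alexa for Business APIs and file a real ticket.

DEVICE_LOCATIONS = {
    "device-id-lobby": "Lobby",
    "device-id-3f": "3rd Floor East",
}

def notify_it(location):
    # Placeholder: in a real skill this would open a ticket in your IT system.
    return f"Ticket filed for the printer near {location}."

def handler(event, context=None):
    # The shared device's ID arrives in the request context.
    device_id = (event.get("context", {})
                      .get("System", {})
                      .get("device", {})
                      .get("deviceId", ""))
    location = DEVICE_LOCATIONS.get(device_id, "an unknown location")
    speech = notify_it(location)
    # Minimal Alexa response envelope (returned from Lambda or over HTTPS).
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

Because the device ID identifies the room, the user never has to say which printer is broken; the skill infers it from where the request was spoken.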
In today’s data centric world, users are asking more of their search engines than ever before. Advanced Intel® technology gives Microsoft Bing* the power of real-time AI to deliver more Intelligent Search to users every day. This requires incredibly compute intensive workloads that are accelerated by Microsoft’s AI platform Project Brainwave on Intel® Arria® 10 FPGAs. Learn how Microsoft Bing* deploys Intel® FPGAs’ efficient and flexible architecture to bring users more intelligent answers and better search results.
I have seen a glimpse of the future impact of artificial intelligence on corporate communications – and it is good. AI will bring a new level of trust to information, improve the way information is delivered (i.e., via augmented reality and virtual reality apps) and provide better insights and predictive analytics for decision making by corporate communications professionals.
My exposure to artificial intelligence has primarily been in the trusted identity technology industry, where AI is starting to revolutionize the digitization of identity and access management, physical access control and cybersecurity, especially as a proactive approach to threat and fraud detection. The management of identities, either physical or digital, is changing rapidly, requiring new ways of thinking to add trust.
Trust is an important topic for corporate communications, too. If people do not trust the information coming from a corporation, credibility is lost. If people think the communications team is out of touch with market realities, is too slow to take action or doesn’t have a vision for the future, then corporate communications unwittingly gets relegated to a tactical corner, subject to the misperceptions and misgivings of narrow-minded, tactics-obsessed, transactional people.
We as corporate communications professionals should expect more from ourselves and our teams. I challenge my colleagues in this field to embrace the opportunities AI presents to augment our communications function in the long term, rather than being defensive, reactionary or ignorant that change will happen.
For the sake of sparking a new stream of dialogue in the communications field, I am going to lay out my case for how AI could help corporate communications in the years to come. I hope readers challenge my points and stretch our collective thinking so we can have an honest discussion about how to harness AI in a productive way to serve our strategic communications goals.
AI does not need to define us or replace us; we have the opportunity to define AI in the context of corporate communications, which includes both external and internal communications. AI technology itself is neither good nor bad. In fact, it is a reflection of the heart of the person using it or unleashing it through automation. Just as the internet has done for more than two decades, it can reveal as much moral clarity as it can moral depravity. Someone can use the internet to spread false or misleading information just as much as to post truthful information.
However, at a higher level, AI’s real value is in enhancing, supporting and amplifying human truth, human experience and, ultimately, human freedom. And I believe that organizations will increasingly become the purveyors of these things in the future. Ironically, AI can help enhance what it means for us to be human.
Full article at: https://www.forbes.com/sites/forbescommunicationscouncil/people/anthonypetrucci/#1f86c7269f50
Alexa is impressive, but it’s still limited to the capabilities Amazon has given it. Where Alexa truly shines is with its Skills, which are third-party apps that give it all sorts of new abilities.
Alexa Skills, like mobile apps, have the potential to make life and work easier and can be great for businesses from both an employee and customer perspective. This guide to Alexa Skills will tell you all you need to know about using, creating, and benefitting from these fantastic additions to Alexa’s capabilities.
What are Alexa Skills, and how can I enable Alexa Skills?
Amazon Alexa is a digital assistant, but it’s also a platform like iOS or Android. Like those more full-fledged systems, Alexa has apps that can extend its usefulness—Amazon calls those apps Skills.
Alexa Skills come in a variety of categories, including business & finance, productivity, news, weather, and more on Amazon’s Alexa Skills page. All Alexa Skills are free, though some require a subscription service to operate properly.
Unlike Google Home, which has a limited number of skills that are enabled on all units by default, Alexa Skills don’t come preinstalled. In order to gain access to a particular Skill, like CNET News, you have to ask Alexa to enable the particular skill, or click Enable on Amazon.com or in the Alexa mobile app.
You can also ask Alexa about particular categories of skills to have it list applicable popular ones, which is a good option if you’re exploring what’s available.
Alexa Skills can also be used in a completely different way—as part of Alexa for Business. Designed to integrate Alexa-controlled Amazon Echo units into offices, Alexa for Business comes with the tools needed for businesses to build custom skills to suit the needs of their environment.
Alexa Skills can be built to control meeting rooms, adjust smart thermostats, turn on lights—essentially anything that can be connected to Alexa can have a custom skill built for it.
Amazon Alexa is arguably the leading digital assistant on the market—it has more third-party connectivity options, it is more open, and it is available on a wider range of devices than Google Assistant, Siri, or Cortana.
Competing digital assistants don’t come close to matching Alexa in terms of skills—Siri and Cortana don’t have comparable app-like abilities. Google Assistant has Actions, but it has barely eclipsed 2,000 Actions, while Alexa has over 25,000 Skills. Alexa is also the only digital assistant to have a dedicated store for its Skills—look around online, and you’ll have an impossible time finding what’s available for Google Assistant. There may be more noise to get through on the crowded Alexa Skills store, but accessing new Skills is much easier with Alexa than on competing platforms.
Until Google does a better job of advertising Actions, it’s going to have a hard time competing with Alexa Skills, which is just one reason why Alexa Skills matter: They’re the leader in apps for voice-activated digital assistants.
Tech companies including Google and Amazon are making big bets on the future of voice-activated technology, and it’s possible we’ll be talking to our computers even more within the next decade. With Alexa being the current leader for voice-controlled apps, it’s also the best place for developers and businesses thinking of working with Alexa Skills.
Assuming you’ve already enabled a Skill via the Alexa Skills store on Amazon’s website or in the Android or iOS Alexa apps, using it requires knowing how to access it, which varies based on the type of Skill it is.
Typical skills that function like apps have an invocation name that is essential for using them. Most skills are accessed by asking Alexa to “open/play/start/ask [invocation name] [request],” which should activate the skill and give a response.
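From the backend's point of view, those two invocation styles arrive as different request types. Here's a rough sketch of how a custom skill might tell them apart; the request shapes follow the Alexa JSON format, but the intent name and responses are made up for illustration:

```python
# Sketch of a custom skill backend distinguishing how it was invoked.
# "Alexa, open <name>" arrives as a LaunchRequest with no intent attached;
# "Alexa, ask <name> <request>" arrives as an IntentRequest with a named intent.

def make_speech(text):
    # Minimal Alexa response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

def handle_request(event, context=None):
    request = event.get("request", {})
    if request.get("type") == "LaunchRequest":
        # Skill opened with no specific request: greet and prompt.
        return make_speech("Welcome. What would you like to do?")
    if request.get("type") == "IntentRequest":
        # The spoken request was matched to a named intent.
        intent = request.get("intent", {}).get("name", "")
        return make_speech(f"Handling the {intent} intent.")
    return make_speech("Sorry, I didn't catch that.")
```

In a real skill, each intent name would map to its own handler function rather than being echoed back.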
Other skills have different methods for activation—smart home skills in particular. Smart home Skills that integrate with Alexa don’t require an invocation to activate—instead, users make a request, such as “turn off the hallway light,” or “raise the temperature three degrees,” and Alexa relays the information to the applicable light, thermostat, or other smart home device as programmed in your smart home app.
News skills, like the aforementioned CNET News Alexa Skill, are in their own category; instead of calling them up manually, news skills are rolled into the Alexa Flash Briefing, which gives a rollup of all the top stories of the day when a user asks Alexa for it.
All publicly available Alexa Skills can be found on Amazon’s Alexa Skills website or in an Alexa mobile app.
Note: You can’t actually use Alexa Skills from inside the Alexa mobile app, but you can talk to Alexa in the Amazon apps for iOS and Android. The experience is best from an actual Alexa device.
How can SMBs and larger businesses use Alexa Skills?
Alexa Skills can benefit businesses from the single-person LLC to the 1,000+ enterprise—it’s all a matter of finding, or building, the Alexa Skills you need to do the work you do. The right Alexa Skills for an individual business will vary greatly based on the company’s size, need, and location (some Alexa Skills are only available in certain countries).
Small businesses will find plenty of Alexa Skills already available that make day-to-day work easier. For instance: the Expedia Alexa Skill can check flight and hotel availability; Newton Mail can read emails out loud; smart office lights can be controlled with Skills like Philips Hue, and thermostats with Skills like those from Nest; and some third-party developers have even stepped up to provide Skills that connect Alexa with popular apps like Wunderlist.
Small offices don’t need to blow the budget with enterprise-level software and hardware to make Alexa Skills practical for use at work—all it takes is an Alexa device and some time spent browsing Amazon for the right Skills.
Larger businesses that want to go beyond using stock Alexa Skills can go a step further with Alexa for Business.
Built to be a complete enterprise tool for integrating Alexa into the office, Alexa for Business offers tools that go far beyond what’s available in the Alexa Skills store. It gives IT managers the ability to provision Alexa devices, manage voice services and users, and connect Alexa to dozens of software providers (including Salesforce, Zoom, Polycom, and more) that have created ways for Alexa to work with their products.
Alexa for Business customers can take advantage of public Alexa Skills, but it’s the private Alexa Skills that really make the platform stand out. Developers in an enterprise with an Alexa for Business subscription can use the Alexa Skills Kit to build Alexa Skills applicable to a particular business environment that are only available to a particular Alexa for Business instance.
There’s also an Alexa for Business API that further extends Alexa’s functionality by allowing businesses “to integrate Alexa for Business into your existing tools, automate administrative tasks, or build your own portals for tasks like user enrollment.”
Independent developers wondering how to get in on Alexa’s growth and business professionals using Alexa for Business to build private Alexa Skills both do so using the Alexa Skills Kit. (The Skills Kit is just one of three ways of developing for Alexa, but it’s what we’re going to focus on here. Check out the Alexa developers portal for info on using the Alexa Voice Service and the Alexa Smart Home and Gadgets tools.)
Here’s one fact about developing for Alexa that all developers will love: it’s language agnostic, at least to an extent. The Alexa Skills Kit doesn’t care which language you use as long as it makes a call to the correct Alexa API. The one big exception is smart home skills, which require an AWS Lambda function, so they can only be written in a language Lambda supports, such as Node.js, Java, Python, or C#.
Amazon breaks down the kinds of Alexa Skills that can be written into four categories:
Flash Briefing Skills, which deliver short news and content updates as part of a user’s daily Flash Briefing.
Smart Home Skills, which let users control connected devices such as lights, locks, and thermostats by voice.
Video Skills, which allow users to control streaming services and internet-connected video playback devices.
Custom Skills, which cover pretty much everything that doesn’t fit into one of the other three categories.
Flash briefing, smart home, and video Alexa Skills all have particular APIs to work with, making them much more straightforward to build than a custom skill. Even so, building custom Alexa Skills isn’t that complex—simply follow the steps laid out by Amazon and be sure you have an AWS account to use a Lambda function, or a cloud provider that allows web service connections over HTTPS.
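To make the custom-skill workflow concrete, here is a minimal sketch of what a Lambda-hosted custom skill backend can look like in Python. It uses the raw Alexa request/response JSON format rather than Amazon’s ASK SDK so that it has no dependencies; the skill’s greeting text and the `HelloIntent` name are illustrative assumptions, not part of any real skill.

```python
# Minimal sketch of a custom Alexa Skill backend as an AWS Lambda handler.
# The intent name "HelloIntent" and the speech text are hypothetical examples.

def build_response(speech_text, end_session=True):
    """Wrap plain speech text in the Alexa response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Entry point Lambda invokes once per Alexa request."""
    request_type = event["request"]["type"]
    if request_type == "LaunchRequest":
        # User opened the skill without asking for anything specific yet.
        return build_response("Welcome to the office helper.", end_session=False)
    if request_type == "IntentRequest":
        intent = event["request"]["intent"]["name"]
        if intent == "HelloIntent":
            return build_response("Hello from your custom skill.")
        return build_response("Sorry, I don't know that one.")
    # SessionEndedRequest and anything else: close out quietly.
    return build_response("Goodbye.")
```

The same shape applies in any Lambda-supported language: receive the request JSON, branch on the request type and intent name, and return the response envelope Alexa expects.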
Why should my business choose Alexa Skills over Actions on Google?
Google Assistant is gaining a lot of ground on Alexa, due in large part to the fact that it comes preinstalled on most Android devices. Having the same digital assistant, with the same capabilities, on both a mobile device and a stationary smart speaker can be appealing for consumers and business users alike.
There isn’t much real competition from Google for business dominance, though. Bloomberg reported in January 2018 that Google plans to go after Amazon’s consumer-facing Alexa services, and Google’s moves since then point much more toward a family-centric smart home system than toward a direct competitor to Alexa for Business.
Google Assistant doesn’t have an enterprise-ready tool like Alexa for Business, which means any Google Home devices in the office would be stand-alone units with limited capabilities.
If your business is considering investing in a voice-activated digital assistant platform with a high degree of customization, there’s no contest: go with Alexa and its diverse Skills tools.
Chatbots and virtual assistants may offer healthcare organizations a low-cost entry point into the burgeoning world of artificial intelligence, indicates a new cross-industry survey conducted by Spiceworks.
By 2019, up to 40 percent of large businesses are likely to integrate virtual assistants like Microsoft Cortana, Apple’s Siri, Amazon Alexa, or Google Assistant into their day-to-day workflows.
About a quarter of smaller and mid-size companies are planning to do the same as they search for more efficient ways to keep productivity high and engage in an increasingly tech-focused workplace.
Voice recognition tools are nothing new for the healthcare industry – many clinicians already rely heavily on voice tools for dictation – but chatbots and ambient computing devices offer a new level of interaction with entities that do more than record documentation.
Voice recognition and AI chatbot functions are being used to support both internal and external communications.
Forty-six percent of current users are leveraging voice-to-text functionalities, while 26 percent are helping their teams collaborate more efficiently through virtual assistants and communication features. Only 10 percent are engaging in data analytics tasks using these tools.
Perhaps surprisingly, only 14 percent of companies are currently using chatbots for customer service, although 20 percent said they are using them to “support their customer service departments” in some way.
Most organizations are deploying these tools and services in their IT departments instead. More than half (53 percent) of current users are letting their IT professionals take the lead, while 23 percent are using virtual assistant tools in their administrative divisions. Just 16 percent are employing AI chatbots and assistants in sales and marketing.
Reluctance to commit to wider deployments is driven by a number of familiar concerns, including privacy and security and insufficient use cases.
Twenty-nine percent of organizations are worried about the privacy and security of the data transmitted through these platforms, while 50 percent simply do not see an immediate application for the toolset.
Only 25 percent stated that the costs of these services are prohibitive, perhaps reflecting the speed and ease with which they have found their way into the consumer technology landscape.
Adoption numbers are expected to rise as artificial intelligence becomes more and more aligned with the needs of healthcare providers and other businesses, but many organizations feel unprepared to expand the role of AI in their workflows and processes.
More than three-quarters think AI will help to automate routine tasks that take up unnecessary time and manpower – up to 19 percent of these mundane jobs could eventually fall under the purview of AI, respondents said.
Interestingly, 19 percent of organizations have held off on implementation due to the belief that these tools would actually distract their workers and reduce productivity rates.
Either way, just 20 percent of participants think their organizations are prepared to handle this major shift.
A mere 5 percent of survey respondents said their companies value AI skillsets when making hiring decisions, which could imperil their eventual success in an increasingly AI-driven world. Even fewer organizations have developed usage policies or training programs for employees already engaged with these tools.
The mismatch between interest and investment is common when it comes to artificial intelligence.
In late 2017, the Center for Connected Medicine found that AI ranked lower as a priority for healthcare organizations than cybersecurity and precision medicine.
And early adopters responding to a recent HIMSS Analytics poll expressed ambivalence or disappointment in some of their cutting-edge deployments, noting that many AI tools are simply not ready for prime time, despite their promises for population health, clinical decision support, and patient diagnosis.
Forty-three percent of participants in that survey also cited unclear or unproven business cases as a reason to defer deployment, echoing the 53 percent of respondents to the Spiceworks poll who thought the same about chatbot tools.
The lack of maturity may also play a role in deciding whether ambient computing and voice recognition tools have meaningful applications in the business environment. Errors in communication are common, and may be particularly frustrating or even dangerous in the patient care environment.
Fifty-nine percent of respondents to the Spiceworks survey said their virtual assistants or chatbots have misunderstood their requests or misinterpreted the nuances of human dialogue.
Thirty percent said these AI companions have executed inaccurate commands, while 14 percent said they supplied inaccurate information.
Since most of the devices and services in use are commercial products that have not been developed in-house, there is little that these organizations can do to improve their interactions on their own.
That could be why only 7 percent of respondents are planning to spend more than $10,000 on AI technologies in 2018. Sixty percent of companies are not seeking to devote any budget at all to AI this year.
Voice-assisted artificial intelligence may become more popular as technology vendors integrate these interface options into their products more completely.
Especially in the healthcare space, where privacy and security are closely governed and patient safety is of paramount concern, ensuring that these tools are adequately developed and adapted to the clinical environment will be crucial.
If ambient computing devices can prove their benefits, they may offer an easier on-ramp into the world of artificial intelligence than is currently on offer with clinical decision support and diagnostic tools.
With a consumer-friendly focus and inherent integration into existing, familiar platforms and services, chatbots and virtual assistants could find a place in healthcare relatively quickly.
However, use cases for these devices and services must be clearly articulated and tested before most healthcare providers are likely to pursue large-scale adoption, putting the onus on vendors to put AI through its paces to ensure that they can support quality care, better patient interactions, and higher levels of provider productivity.
Add Amazon.com (AMZN) and its voice-activated Alexa digital assistant to the growing list of rivals aiming to take on the money transfer service of PayPal Holdings’ (PYPL) Venmo.
Shares in PayPal plunged 4% to close at 73.86 on the stock market today amid a Wall Street Journal report that Amazon is mulling whether to use Alexa to start a person-to-person payment service like Venmo. Amazon dropped 3.2% to 1,405.23, but that was after President Donald Trump said he would order a review of government policies that might affect Amazon.
Apple (AAPL) and Square (SQ) are among companies that have already launched Venmo-like services. Zelle, also a person-to-person payment app, was launched by 30 U.S. banks in September.
With Venmo, consumers link a bank account to a smartphone app and send money to friends or family using an email address.
Amazon uses artificial intelligence tools to make its Alexa digital assistant useful for shopping and other tasks. Alexa is built into Amazon’s popular Echo-branded home appliances. Echo owners would need to link their bank accounts to their Amazon account to enable money transfers.