<aside> ⚠️ Caution: Still in Heavy Development
</aside>
This library is built on the new Alexa.Presentation.APLA.RenderDocument directive, which was revealed at Alexa Live and can be used now on all skills & devices (no need to check the device's supported interfaces). See the blog post for more 👉 Introducing Alexa Presentation Language (APL) for Audio (linked below).
It works with the responseBuilder in ask-sdk for Node, and instead of requiring voice developers to create APLA documents ahead of time, they can be created at runtime in the handler code, without using datasources.
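For a sense of what that saves you from writing, here's a rough sketch of the raw RenderDocument directive you would otherwise add by hand through the ask-sdk responseBuilder; the document shape and version shown are assumptions based on the public APLA docs, not necessarily what apla-responder emits.
// Rough sketch only — document shape/version assumed from the public APLA docs
handlerInput.responseBuilder
  .addDirective({
    type: "Alexa.Presentation.APLA.RenderDocument",
    token: "my-directive-token",
    document: {
      type: "APLA",
      version: "0.8", // assumed launch version; check the current docs
      mainTemplate: {
        item: {
          type: "Speech",
          contentType: "PlainText",
          content: "That's the correct answer!"
        }
      }
    }
  })
  .getResponse();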
Install it into your project with yarn or npm:
yarn add apla-responder
npm install apla-responder
Include the dependency components near the top in your handler JS files where you want to use APLA.
const { AudioResponse, Components: Apla } = require('apla-responder');
handle(handlerInput) {
  // some logic
  const prompt = "Would you like another?";
  const res = new AudioResponse(handlerInput);
  res.speak("That's the correct answer! " + prompt);
  // or res.speak("<speak>Can also include SSML like this instead of PlainText</speak>", "SSML");
  res.repromptWith(prompt);
  return res.getResponse();
}
getResponse() returns the ask-sdk responseBuilder's getResponse() result. To use SSML instead of plain text, wrap the content in <speak /> tags and pass "SSML" as the optional second parameter.
Use getResponseBuilder() to continue adding further directives as usual:
handle(handlerInput) {
  // some logic
  const prompt = "Which colour would you like?";
  const slotName = "colour";
  const res = new AudioResponse(handlerInput, "my-custom-directive-token");
  res.speak("Nice! " + prompt);
  res.repromptWith(prompt);
  return res.getResponseBuilder() // add more directives using this method
    .addElicitSlotDirective(slotName) // or other ask-sdk functions etc
    .getResponse();
}
const fanfareUrl = "https://somepath.com/to/audio.mp3";
const fanfare = new Apla.Audio(fanfareUrl);
handle(handlerInput) {
  // some logic
  const res = new AudioResponse(handlerInput);
  res.playAudio(fanfare);
  // or pass the URL string directly: res.playAudio(fanfareUrl);
  res.speak("That's correct!");
  return res.getResponse();
}
Behaviour
Prefer the JS class approach for individual components, e.g. new Apla.Audio(), so that more details can be passed in if/when more features become available.
const fanfareUrl = "https://somepath.com/to/audio.mp3";
const fanfare = new Apla.Audio(fanfareUrl);
const applauseUrl = "https://somepath.com/to/applause.mp3";
const queenEntranceMixer = new Apla.Mixer([
  fanfare,
  new Apla.Speech("<speak><break time=\"1s\"/>Please welcome the Queen!</speak>", "SSML"),
  new Apla.Audio(applauseUrl)
]);
handle(handlerInput) {
  // some logic
  const res = new AudioResponse(handlerInput);
  res.useMixer(queenEntranceMixer); // this could come from a CMS or 'content' part of the voice app
  res.silence(1500);
  res.speak("The Queen then looked upon her subjects.");
  return res.getResponse();
}
Behaviour
You may want to reuse certain soundscapes across multiple handlers/handler groups (I guess that's the whole point of APLA documents in the first place lol), so moving content such as queenEntranceMixer to a different part of the voice app and returning it from a function may be the best way to go, as in the sketch below.
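A minimal sketch of that approach, assuming a hypothetical content.js module (the file name and wrapper function are illustrative; the Apla components, URLs and useMixer call are the ones shown above):
// content.js — a hypothetical module holding reusable soundscapes
const { Components: Apla } = require('apla-responder');

// placeholder URLs, as in the examples above
const fanfareUrl = "https://somepath.com/to/audio.mp3";
const applauseUrl = "https://somepath.com/to/applause.mp3";

function queenEntranceMixer() {
  return new Apla.Mixer([
    new Apla.Audio(fanfareUrl),
    new Apla.Speech("<speak><break time=\"1s\"/>Please welcome the Queen!</speak>", "SSML"),
    new Apla.Audio(applauseUrl)
  ]);
}

module.exports = { queenEntranceMixer };

// then in a handler file:
// const { queenEntranceMixer } = require('./content');
// res.useMixer(queenEntranceMixer());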
Last Updated: 28th July 2020
Introducing Alexa Presentation Language (APL) for Audio
https://github.com/fx-adr/apla-responder
If you want to see more flexibility, plz 👍 my feature request on alexa.uservoice.com to be able to use APLA for reprompts.
<aside> 👀 This is a live Notion document, you may see things change in front of you.
</aside>