Creating audio that works across browsers and mobile platforms is a challenging task. After spending several days attempting to make it work, I ran into an impasse on iOS. So I turned to a new concept: audio sprites. It works!
Before I start posting code samples, some bottom-line conclusions about getting the HTML5 audio tag to work across platforms:
You will need multiple audio file formats to support different platforms. W3schools has a good list of audio formats supported by different browsers.
A combination of mp3 and wav will do the job and that is what I have chosen.
When using audio for sound effects in an app, do not split them into individual files. Instead, create a single file with well-defined break points, so you can seek to the sound you need.
Be sure to add nice-sized periods of silence between sounds to ensure you have some leeway in arriving at the right place within the audio file.
Audacity is a great tool to do the necessary sound editing, despite the painful UI (really painful; you've been warned).
Allow a couple of weeks in your project plan for a sound prototype. Things don't always work the way you hope, but they do work out in the end; knowing this, perhaps you won't blow the timeframe.
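The single-file-with-break-points approach above needs only a little bookkeeping. Here is a minimal sketch of such a sprite map; the file name, sound names, and offsets are hypothetical, taken from the sort of notes you'd make while assembling the file in Audacity:

```javascript
// Hypothetical sprite map: each entry records where a sound starts in the
// combined file and how long it runs, in seconds. The offsets come from the
// break points you noted while editing the single audio file.
const spriteMap = {
  file: "effects.mp3",              // plus an effects.wav fallback
  sounds: {
    click:    { start: 0.0, duration: 0.4 },
    whoosh:   { start: 1.5, duration: 0.9 },  // silence padding before it
    applause: { start: 3.5, duration: 2.0 }
  }
};

// Look up where to seek for a given effect.
function spriteOffset(name) {
  const s = spriteMap.sounds[name];
  if (!s) throw new Error("unknown sound: " + name);
  return s;
}
```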
So why HTML5 audio instead of flash? Flash is an awesome tool of the past. HTML5 is young, but it'll get there. Invest in the right solution early. And like it or not, Apple has made its intentions known and it will win. Flash has already been dealt its fatal blow.
Now to the fun part. Audio sprites are the audio parallel of CSS sprites: a single file from which you display (or, in our case, play) only the portion needed for a specific item. The file can be used multiple times and may therefore contain data for different, even unrelated, applications.
I discovered the concept of audio sprites on Remy Sharp's blog, where he posted the results of a monumental effort to get HTML audio working on iOS. Two and a half years later, it is not a day out of date.
I have made some changes to his code and will describe the results here.
My biggest discovery was that one needs to put some silence between individual sound bites and plan to "trim" the edges.
Snapshot of an audio file, with the silent edges shown in dark grey.
I modified the code to take the size of an edge as a parameter and to take it into account both when starting and when stopping playback. Different browsers handle timing ever so slightly differently, and you will lose that battle unless you play it safe. We are talking about fractions of a second, less than a tenth; but letting a bit of the next sound bleed into the previous one spoils the effect.
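The edge arithmetic is simple enough to show on its own. This is a sketch, not the post's original code; the names are illustrative, and the edge is assumed to be given in seconds:

```javascript
// Given a sprite entry ({ start, duration } in seconds) and an edge (the
// slice of silent padding we are willing to sacrifice on each side),
// compute a safe window to play.
function safeWindow(sprite, edge) {
  return {
    start: sprite.start + edge,                    // skip past the leading silence
    stop:  sprite.start + sprite.duration - edge   // stop before the next sound can bleed in
  };
}
```

Even if the browser's seek or stop lands a few hundredths of a second off, it still lands inside the silent padding rather than inside a neighbouring sound.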
Second, audio playback is non-blocking, so multiple calls to audio.play() will result in the browser trying to play bits and pieces simultaneously... badly. So I created a simple queue that starts the next sound once the previous one has ended.
Finally, I realized we may need a pause when playing a series of sounds, so I added a pause parameter to the queue-management code to insert silence between sounds when required.
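The queue itself can be very simple. Here is a sketch (the names are mine, not the original code's), where each entry records the sound to play and an optional pause to honour before the next one starts:

```javascript
// Minimal FIFO for pending sounds. Each entry carries an optional pause
// (seconds of silence to insert before the *next* entry starts).
function SoundQueue() {
  this.items = [];
}
SoundQueue.prototype.enqueue = function (name, pause) {
  this.items.push({ name: name, pause: pause || 0 });
};
SoundQueue.prototype.dequeue = function () {
  return this.items.shift() || null;   // null when nothing is pending
};
```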
So here it is, the basic usage I was building for.
First, we set up a constructor of Track, the class that will be playing the audio. The comments about included iOS magic are Remy's.
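The original listing does not survive in this copy, so below is a minimal sketch of the shape such a constructor can take. The field names and the default edge are my assumptions, not Remy's code, and his iOS magic (priming the audio element from a user-triggered touch event) is not reproduced here. The audio element is injected so the class can be exercised outside a browser; in the browser you would pass a real audio element:

```javascript
// Sketch only: a Track wraps an audio element plus a sprite map.
// `audio` is anything with play()/pause() and a currentTime property;
// in the browser you would pass `new Audio("effects.mp3")`.
// `sprites` maps names to { start, duration } in seconds (hypothetical names).
// `edge` is the slice of silent padding trimmed from each side of a sprite.
function Track(audio, sprites, edge) {
  this.audio = audio;
  this.sprites = sprites;
  this.edge = edge || 0.05;  // assumed default; tune to your silence gaps
  this.queue = [];           // pending { name, pauseAfter } entries
  this.timer = null;         // handle for the scheduled stop
}
```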
Now the fun part: play, pause, and queue management:
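Again, the original listing is missing from this copy, so here is a self-contained sketch of how these pieces can fit together (the Track constructor is redeclared so the example stands alone). It is timer-based: seek to the sprite's start plus the edge, schedule a stop at its end minus the edge, honour any requested pause, then start whatever is queued next. The method names and parameters are my assumptions:

```javascript
// Sketch of play/stop and queue management for a Track.
// `audio` needs play()/pause() and a writable currentTime (seconds);
// `sprites` maps names to { start, duration }; `edge` trims each side.
function Track(audio, sprites, edge) {
  this.audio = audio;
  this.sprites = sprites;
  this.edge = edge || 0.05;
  this.queue = [];
  this.timer = null;
}

// Queue a sprite; `pauseAfter` (seconds, optional) inserts silence
// between this sound and the next queued one.
Track.prototype.play = function (name, pauseAfter) {
  this.queue.push({ name: name, pauseAfter: pauseAfter || 0 });
  if (!this.timer) this._next();   // idle: start immediately
  return this;                     // allow chaining
};

Track.prototype._next = function () {
  const item = this.queue.shift();
  if (!item) { this.timer = null; return; }
  const s = this.sprites[item.name];
  // Trim this.edge from both sides so browser timing jitter never lets
  // the neighbouring sound bleed through.
  this.audio.currentTime = s.start + this.edge;
  this.audio.play();
  const playFor = s.duration - 2 * this.edge;
  const self = this;
  this.timer = setTimeout(function () {
    self.audio.pause();
    self.timer = setTimeout(function () { self._next(); },
                            item.pauseAfter * 1000);
  }, playFor * 1000);
};

Track.prototype.stop = function () {
  clearTimeout(this.timer);
  this.timer = null;
  this.queue = [];
  this.audio.pause();
};
```

With a sprite map like the earlier one, a hypothetical call chain would be `track.play("click", 0.5).play("whoosh")`: the click plays, half a second of silence follows, then the whoosh starts.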
I hope this basically makes sense. Note the use of this.edge both when starting playback and when stopping it.