… restoring the original order back to those original order conditions.
Sometimes alphabetical and numerical “orders” mean something too, but not, in our view, for these “YouTube Video List of Play” scenarios.
Today, we’ve also shored up the dynamic oncontextmenu (and now ondblclick) event textbox functionalities via the external Javascript’s …
if (document.getElementById('rshuffle')) {
  var ohis = document.getElementById('rshuffle').outerHTML.split('placeholder=')[0];
  var evstuff = " if (event.target.value == '') { event.target.value=inpvals[curvalis] + '|' + inpttls[curvalis]; event.target.blur(); } ";
  if (('' + document.getElementById('rshuffle').placeholder).indexOf(' click ') == -1) {
    betterhc = capitfl(document.URL.split('.htm')[0].split('/')[document.URL.split('.htm')[0].split('/').length - 1].replace(/\_/g, ' '));
    if (navigator.userAgent.match(/Android|BlackBerry|iPhone|iPad|iPod|Opera Mini|IEMobile/i)) {
      // mobile: flag double click as the way to repopulate an emptied textbox
      document.getElementById('rshuffle').placeholder += ' ... double click for ' + decodeURIComponent(inpttls[curvalis]);
      ohis = document.getElementById('rshuffle').outerHTML.split('placeholder=')[0];
      if (1 == 1) { // the outerHTML rewrite branch is the one in use
        document.getElementById('wshuffle').innerHTML = document.getElementById('wshuffle').innerHTML.replace(ohis, ohis.replace('<input ', '<input oncontextmenu="' + evstuff + '" ondblclick="' + evstuff + '" '));
      } else {
        document.getElementById('rshuffle').ondblclick = function(event) { if (event.target.value == '') { event.target.value = inpvals[curvalis] + '|' + inpttls[curvalis]; event.target.blur(); } };
      }
    } else {
      // non-mobile: double click or two finger (contextmenu) gesture
      document.getElementById('rshuffle').placeholder += ' ... double click (or two finger gesture) for ' + decodeURIComponent(inpttls[curvalis]);
      ohis = document.getElementById('rshuffle').outerHTML.split('placeholder=')[0];
      if (1 == 1) { // the outerHTML rewrite branch is the one in use
        document.getElementById('wshuffle').innerHTML = document.getElementById('wshuffle').innerHTML.replace(ohis, ohis.replace('<input ', '<input oncontextmenu="' + evstuff + '" ondblclick="' + evstuff + '" '));
      } else {
        document.getElementById('rshuffle').addEventListener('contextmenu', function(event) { if (event.target.value == '') { event.target.value = inpvals[curvalis] + '|' + inpttls[curvalis]; event.target.blur(); } });
        document.getElementById('rshuffle').addEventListener('dblclick', function(event) { if (event.target.value == '') { event.target.value = inpvals[curvalis] + '|' + inpttls[curvalis]; event.target.blur(); } });
      }
    }
    document.getElementById('rshuffle').style.backgroundColor = '#f9f9f9';
    if (document.getElementById('dshuffle')) {
      document.getElementById('dshuffle').innerHTML += '<style> select { text-shadow: -1px 1px 1px #e52dff; } #rshuffle::placeholder { text-shadow: -1px 1px 1px #e52dff; } #rshuffle:-ms-input-placeholder { text-shadow: -1px 1px 1px #e52dff; } #rshuffle::-ms-input-placeholder { text-shadow: -1px 1px 1px #e52dff; } </style>';
    }
  } else {
    // placeholder already flagged ... just rotate the suggested entry
    var wascur = curvalis;
    curvalis++;
    if (curvalis >= inpvals.length) { curvalis = 0; }
    if (navigator.userAgent.match(/Android|BlackBerry|iPhone|iPad|iPod|Opera Mini|IEMobile/i)) {
      if (curvalis != wascur && document.getElementById('rshuffle').placeholder.indexOf(' ... double click for ') != -1) {
        document.getElementById('rshuffle').placeholder = document.getElementById('rshuffle').placeholder.split(' ... double click for ')[0] + ' ... double click for ' + decodeURIComponent(inpttls[curvalis]);
      }
    } else {
      if (curvalis != wascur && document.getElementById('rshuffle').placeholder.indexOf(' click (or two finger gesture) for ') != -1) {
        document.getElementById('rshuffle').placeholder = document.getElementById('rshuffle').placeholder.split(' ... double click (or two finger gesture) for ')[0] + ' ... double click (or two finger gesture) for ' + decodeURIComponent(inpttls[curvalis]);
      }
    }
  }
}
… making their definition part of the HTML, which we’ve noticed in a couple of projects now, creates less flakiness using these means to populate several other “Brady Bunch” YouTube video 3×3 iframe table scenarios.
YouTube Video List of Play External Javascript Supervisor Tutorial
Revisiting web application projects, and incorporating new functionality, we get best joy when a suite of web applications like our “Brady Bunch YouTube Music” peer to peer suite can all be attended to via changes to just the one external Javascript they all call. In a word, this is “modularization” at play.
… with us still differentiating the workflow logic between non-mobile and mobile “YouTube Video List of Plays”, though it was tempting to adopt yesterday’s mobile logic for non-mobile! Maybe we will into the future!
… it took us all of two days to not need to do this, but today’s “peer to peer suite” of interfacings involve …
YouTube videos of unknown duration ...
… that is a “whole new ball game” for our coding. We either expend what we deemed as too much effort ahead of our “YouTube List of Plays” working out durations, or tweak the “grandchild” for more flexibility should durations not be known, and that is where the “mobile” work from two days ago outplays the “non-mobile” work, because delays regarding …
the “non-mobile” logic, working via setTimeout based delays … which are much less user responsive than …
the “mobile” logic, revolving around a YouTube API event logic interception point, where the YouTube API determined “end of video play” overrides (what we start with as) an overinflated duration guesstimate
… needing the new external Javascript variables …
var thevidtocheck=0, vidstocheck=[], frombizzo=[], tobizzo=[];
var alookmade=false, wowowo=null, wowowourl='';
… in our changedyoutube_brady_bunch.js being used in our inhouse Disco “peer to peer suite” web application example where a 🌓 -> 🌗 emoji button accesses this new sequential play possibility.
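The two delay strategies above can be contrasted in a minimal sketch. Note this is our own illustration, not the blog's actual code: `scheduleNext`, `GUESS_SECONDS` and `playNext` are names we have invented for the purpose …

```javascript
// Sketch (our own naming, not the real external Javascript): non-mobile
// schedules the next video via setTimeout when the duration is known;
// for unknown durations we arm an overinflated guesstimate and let the
// YouTube API's "end of video play" event cut the wait short.
function scheduleNext(durationSeconds, playNext) {
  var GUESS_SECONDS = 3600; // overinflated duration guesstimate
  var millis = (durationSeconds > 0 ? durationSeconds : GUESS_SECONDS) * 1000;
  var timer = setTimeout(playNext, millis);
  // The YouTube API onStateChange wiring would call this on ENDED,
  // overriding the guesstimate with the real end of play
  return function onEnded() {
    clearTimeout(timer);
    playNext();
  };
}
```

… on non-mobile with a known duration the setTimeout alone suffices, while on mobile the returned `onEnded` is what the YouTube API event interception would invoke.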
Still and all, when we know a duration on non-mobile we’re going to keep the setTimeout arrangement coding ready. Down the line, for some mobile scenario, it may be needed the other way around … who knows?!
Some of the motivation for today’s addition of Shuffle logic to the “YouTube Video List of Plays” work we last mentioned with Spliced Audio/Video YouTube Mobile Recall Tutorial of recent times is to do with the huge percentage of time spent, in such a project, running the same test conditions again and again and again, and we were getting a bit bored with the same order of songs each time, for hundreds of tests.
We thought it would be pretty easy to do, but strangely, there were timing issues making it more complicated than we thought it would be for a user who clicks the new Shuffle checkbox that appears when a “YouTube Video List of Plays” is detected. For the non-mobile side of the work, we can point to a new Javascript function that helps with most of the work …
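That function itself is not reproduced here, but the core of any such Shuffle is a Fisher-Yates pass over the play order. As a sketch (with `shuffleOrder` being our illustrative name, not the blog's helper) …

```javascript
// A minimal Fisher-Yates shuffle of list indices (our sketch, not the
// blog's actual helper): returns a randomized play order for n videos.
function shuffleOrder(n) {
  var order = [];
  for (var i = 0; i < n; i++) { order.push(i); }
  for (var j = order.length - 1; j > 0; j--) {
    var k = Math.floor(Math.random() * (j + 1));
    var swap = order[j];
    order[j] = order[k];
    order[k] = swap;
  }
  return order;
}
```

… with the unshuffled list kept around so that unticking the Shuffle checkbox can restore the original order.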
… with us still differentiating the workflow logic between non-mobile and mobile “YouTube Video List of Plays”, though it was tempting to adopt yesterday’s mobile logic for non-mobile! Maybe we will into the future!
Spliced Audio/Video YouTube Mobile Recall Tutorial
Of course we want to be like Spotify, with the one tap meaning peace for long periods playing music, as it is with YouTube playlists on mobile platforms. God knows, we’ve whinged enough about this mobile requirement to have a real user tap precede media play. And so, in this context, yesterday’s Spliced Audio/Video YouTube Recall Tutorial’s non-mobile “playlist” style YouTube video playing was a “walk in the park”.
… we finally arrived at a happy home for our interventional Javascript … as well as a first play intervention. As important as “intervention” is, this intervention needed some “preparatory intervention” that little bit earlier on in time …
Methinks it’s doubtful this avoidance of “all but the first” mobile tap “ask” can be achieved without using the YouTube API. For mobile, with these inhouse …
… we cut out “parent” involvement in any “decision making” sense, except as the place where checkboxes determine what is played, otherwise we know, you risk needing to re-rely on user instigated taps after the first when playing mobile “playlist” video lists sequentially.
… organize themselves with a hashtag based calling URL logic when multiple YouTube videos are being asked to play sequentially. Here is new Javascript featuring in the inhouse YouTube API Video Player via a new lhchk(''); call at the document.body onload event …
… we find useful, around here, to describe web page design issues and solutions. They both come into play a lot, at least for us. Mind you, our thoughts may have pared down complication thinking to arrive at these two concept foci.
Today, we’re rewriting the existing Overlay logic from 2016 into our inhouse Spliced Audio/Video/Image web application, to cater for those data URL additional input data functionalities we’ve added with our 2025 revisit.
… template HTML Javascript variables a style="display:none;" … and curiously, we were going to use style="" but this actually has more meaning than you’d think, as it seems to create a set of default styling decisions?! … and then at document.body onload event we have …
Another day, another deliberation about delimitation! Yes, as a programmer, of the mere mortal variety, and you ask a bit of your engaged users, you’ll not have much chance of …
changing hardware
changing firmware
changing environment … very much
… achieving your ends … ngah ha ha!
But you have got delimitation on your side.
Down @ …
Wait for ! to finish first …
We do this type of work a lot, and recommend …
get firm idea in the mind of what you want to achieve (yesterday and today, it being the “midstream” ability to change YouTube video ID references using our inhouse YouTube video interfacer) … including …
what might happen into the future … and allow for flexibility should a new piece of functionality happen into the future … and …
the work is often behind the scenes, and the user not interested in odd arrangements, so flag with title hovering (non-mobile) and/or placeholder (non-mobile or mobile) textbox flagging of your preferred usage … though …
behind the scenes, where possible, allow for more flexibility, regarding which delimiter makes the functionality happen, should the user forget, and flounder
generally speaking textbox static HTML like …
<input placeholder="Can | separate time to next YouTube video ID (use ; for just audio)" style="width:400px;" onblur="checkval(this);" type="text" onmouseover="toms(this);" id="i1" name="i1" value="" title="0:00:00">
…
but reworked for the first such textbox (and no, using CSS for this is not recommended) using Javascript …
setTimeout(function(){ if (document.getElementById('i0')) { document.getElementById('i0').placeholder='Can ; separate time to flag Just Audio'; } }, 4000);
… to help inform the user of what is possible … ngah ha ha
No | … wait … ~!@#$%%%#@#$^& … ouch
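As a hedged sketch of the delimiter idea above (where `parseEntry` is our illustrative name and this simplified unpacking is our assumption about the textbox semantics, not the real checkval logic) …

```javascript
// Sketch (our assumption of the delimiter semantics, not the real
// checkval): | separates a time from the next YouTube video ID, and
// the presence of ; flags "just audio" for that entry.
function parseEntry(value) {
  var justAudio = value.indexOf(';') != -1;
  var parts = value.replace(/;/g, '|').split('|');
  return {
    time: parts[0],
    nextVideoId: (parts.length > 1 ? parts[1] : ''),
    justAudio: justAudio
  };
}
```

… the point of the "behind the scenes flexibility" advice being that either delimiter still gets the entry parsed, should the user forget which one does what.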
Proof of the delimitation pudding here, we think, is that we could move off yesterday’s …
blast from the past version
… working methodically through the issues, making the current version better too. You will find web browser Web Inspectors invaluable here. We used a “suck it and see” approach to making changes and saw where the Web Inspector took us regarding errors we caused, and fixed those, so that we could apply the better logic to the most recent version of our inhouse YouTube video interfacer.
text to send to Google Translate … is bolstered today via a new modus operandum …
via YouTube video ID 11 character code play a YouTube video
… feeding into our YouTube API work, but resurrecting an inhouse “blast from the past version” close to being okay, on non-mobile, changing YouTube video IDs “midstream” and keeping the video playing work all within the one webpage window. Am sure we had ideas like this in mind using the “Splicing” word in the web application title. We’ve always been a bit obsessed with those train announcements piecing a message together, especially when it comes to “number words” via a combination of audio media snippets.
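Those 11 character codes can be fished out of most YouTube URL shapes with a regular expression. A sketch (with `extractYouTubeId` being our illustrative name, not code from the web application) …

```javascript
// Sketch: extract the 11 character YouTube video ID from common URL
// shapes (watch?v=, youtu.be/, embed/), or accept a bare ID as-is.
function extractYouTubeId(input) {
  var m = input.match(/(?:v=|youtu\.be\/|embed\/)([A-Za-z0-9_-]{11})/);
  if (m) { return m[1]; }
  return (/^[A-Za-z0-9_-]{11}$/.test(input) ? input : '');
}
```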
Also, today, with limited practical success, we’re trying to allow a user to loop through their media list, repeating the playing. There are that many ways we can get interrupted achieving this, it’s not funny … really … but we’re not getting upset because the user can go back to the source window and reclick buttons of their choice to keep the good times rolling … ah, MacArthur Park … again!
If you’ve been following our Spliced Media synchronized play project of recent times, up to yesterday’s Spliced Audio/Video Styling Tutorial, you’ll probably guess what our “project word” would be, that being …
duration
… as a “measure” of importance to help with the sequential play of media, out of …
audio
video
image
… choices of “media category” we’re offering in this project. But, what “duration” applies to image choice above? Well, we just hardcode 5 seconds for …
(non-animated) JPEG or PNG or GIF … but …
animated GIFs have a one cycle through duration …
… that we want to help calculate for the user and show in that relevant “end of” timing textbox. Luckily, we’ve researched this in the past, but every scenario is that bit different, we find, and so here is the Javascript for what we’re using …
function prefetch(whatgifmaybe) { // thanks to https://stackoverflow.com/questions/69564118/how-to-get-xxduration-of-gif-image-in-javascript#:~:text=Mainly%20use%20parseGIF()%20%2C%20then,xxduration%20of%20a%20GIF%20image.
  if ((whatgifmaybe.toLowerCase().trim().split('#')[0].replace('/gif;', '/gif?;') + '?').indexOf('.gif?') != -1 && lastgifpreq != whatgifmaybe.split('?')[0]) {
    lastgifpreq = whatgifmaybe.split('?')[0];
    if (whatgifmaybe.indexOf('/tmp/') != -1) {
      lastgifurl = '/tmp/' + whatgifmaybe.split('/tmp/')[1];
    } else {
      lastgifurl = '';
    }
    document.body.style.cursor = 'progress';
    whatgifmaybe = whatgifmaybe.split('?')[0];
    // ... the parseGIF() based duration calculation continues on from here ...
  }
}
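For a feel of what that parseGIF() style work computes, here is a naive sketch of our own (deliberately simplified: a real parser walks the GIF block structure rather than pattern-scanning the bytes, which can false-positive on pixel data) that sums frame delays from Graphic Control Extension blocks …

```javascript
// Sketch (ours, simplified): an animated GIF stores a per-frame delay
// in each Graphic Control Extension block (0x21 0xF9 0x04 ...), as a
// little-endian 16 bit count of hundredths of a second. Summing those
// delays estimates the "one cycle through" duration.
function gifCycleSeconds(bytes) {
  var hundredths = 0;
  for (var i = 0; i < bytes.length - 5; i++) {
    if (bytes[i] === 0x21 && bytes[i + 1] === 0xF9 && bytes[i + 2] === 0x04) {
      hundredths += bytes[i + 4] + (bytes[i + 5] << 8);
    }
  }
  return hundredths / 100;
}
```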
Today, we start concertinaing multiple image asks (and yet show image order), using our favourite “reveal” tool, the details/summary HTML element dynamic duo, so that there are fewer images disappearing “below the fold” this way in …
We have “bad hair days”, but that doesn’t stop us seeking “styling days”. Yes, we often separate CSS styling into an issue that is addressed only if we deem the project warrants it, and we’ve decided this latest Spliced Audio/Video web application project is worth it. This means that further to yesterday’s Spliced Audio/Video Browsing Data URL Tutorial we have some new CSS styling in today’s work …
… into (in the second phase of its existence within an execution run) a …
linear gradient inspired progress bar … the secret to the “hard stops” being great new advice we got from this link, thanks … to come up with dynamic CSS styling via Javascript …
function dstyleit(what) {
  // Duplicate the yellow colour stop's percentage so the gradient
  // changes abruptly there (the "hard stop") rather than blending on
  var oney = what.split('yellow ')[1].split('%')[0] + '%,';
  what = what.replace(oney, oney + oney);
  document.getElementById('dstyle').innerHTML += '<style> #subis { background: ' + what + ' } </style>';
  return what;
}
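To see what dstyleit is doing to the gradient string, here is a pared down sketch of just the string transform (with `hardStopAfter` being our illustrative name), duplicating the stop's percentage so the gradient gets an abrupt change at that position …

```javascript
// Sketch (our name, same string idea as dstyleit): duplicating a colour
// stop's percentage, e.g. 'yellow 40%,' -> 'yellow 40%,40%,', is what
// the dynamic styling leans on for its abrupt "hard stop" look.
function hardStopAfter(gradient, color) {
  var pct = gradient.split(color + ' ')[1].split('%')[0] + '%,';
  return gradient.replace(pct, pct + pct);
}
```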
browsing for local files off the client operating system environment … and …
a genericized “guise” of web server media files … and …
client File API blob or canvas content media representations
… a point of commonality that, as far as we are concerned, we are always looking to these days, whenever we can, data size permitting, so as to do away with static web server references, open to so many mixed content and privilege issues.
Perhaps, that is the major reason, these days, we’ve taken more and more to hashtag based means of communication via “a” “mailto:” (email) or “sms:” (SMS) links.
Today proved to us that that, albeit most flexible of all, clientside hashtag approach has its data limitations, which PHP serverside form method=POST does better at, as far as accommodating large amounts of data goes.
And so, yesterday, with our Splicing Audio or Video inhouse web application, turning more towards data URLs to solve issues, we started the day …
trying hashtag clientside based navigations … but ran into data size issues, so …
introduced into our Splicing Audio or Video inhouse web application, for the first time, PHP serverside involvement …
… that made things start working better for us, getting the synchronized play of our user entered audio or video media items performing well.
And so, further to yesterday’s Spliced Audio/Video Data URL Tutorial we added browsing for local media files as a new option for user input, by calling …
// Duration capture handlers: the 'san*' ids are the <source> subelements
// whose src we fill with a data URL, while the 'an*' ids are the parent
// media elements whose duration fills the relevant "end of" timing
// textbox (durtoid) for the user
function ahere(evt) {
  if (document.getElementById('sanaudio').src.trim() != '') {
    document.getElementById(durtoid).value = Math.ceil(document.getElementById('anaudio').duration);
  }
}

function alere(evt) {
  if (document.getElementById('sanaudio').src.trim() != '') {
    document.getElementById(durtoid).value = Math.ceil(document.getElementById('anaudio').duration);
  }
}

function vhere(evt) {
  if (document.getElementById('sanvideo').src.trim() != '') {
    document.getElementById(durtoid).value = Math.ceil(document.getElementById('anvideo').duration);
  }
}

function vlere(evt) {
  if (document.getElementById('sanvideo').src.trim() != '') {
    document.getElementById(durtoid).value = Math.ceil(document.getElementById('anvideo').duration);
  }
}
This works if you can fill in the src attribute of the relevant subelement source element with a suitable data URL (we used the changeddo_away_with_the_boring_bits.php helping PHP to derive). From there, in that event logic an [element].duration is there to help fill out those end of play textboxes in a more automated fashion for the user that wants to use this new functionality, as they fill out the Spliced Media form presented.
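As a sketch of the shape of data URL being slotted into that src attribute (`toDataURL` is our illustrative name, and Buffer is Node-only here; in the browser the File API's FileReader.readAsDataURL() produces the same shape of string) …

```javascript
// Sketch: a data URL is just the media's MIME type plus its bytes,
// base64 encoded, in one self-contained string, so no static web
// server reference is needed.
function toDataURL(mime, bytes) {
  return 'data:' + mime + ';base64,' + Buffer.from(bytes).toString('base64');
}
```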
Today we’ve written a third draft of an HTML and Javascript web application that splices up to nine bits of audio or video or image input together, building on the previous Spliced Audio/Video/Image Overlay Tutorial as shown below, here, and that can take any of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
video
image … and background image for webpage
… for either of the modes of use, that being …
discrete … or “Optional”
synchronized … or “Overlay”
… all like yesterday, but this time we allow you to “seek” or position yourself within the audio and/or video media. We still all “fit” this into GET parameter usage. Are you thinking we are a tad lazy with this approach? Well, perhaps a little, but it also means you can do this job just using clientside HTML and Javascript, without having to involve any serverside code like PHP, and in this day and age, people are much keener on this “just clientside” or “just client looking, plus, perhaps, Javascript serverside code” (ala Node.js) or perhaps “Javascript clientside client code, plus Ajax methodologies”. In any case, it does simplify design to not have to involve a serverside language like PHP … but please don’t think we do not encourage you to learn a serverside language like PHP.
While we are at it here, we continue to think about the mobile device unfriendliness with our current web application, it being, these days, that the setting of the autoplay property for a media object is frowned upon regarding these mobile devices … for reasons of “runaway” unknown charge issues as you can read at this useful link … thanks … and where they quote from Apple …
“Apple has made the decision to disable the automatic playing of video on iOS devices, through both script and attribute implementations.
In Safari, on iOS (for all devices, including iPad), where the user may be on a cellular network and be charged per data unit, preload and auto-play are disabled. No data is loaded until the user initiates it.” – Apple documentation.
A link we’d like to thank regarding the new “seek” or media positioning functionality is this one … thanks.
Also, today, for that sense of symmetry, we start to create the Audio objects from now on using …
document.createElement("AUDIO");
… as this acts the same as new Audio() to the best of our testing.
For your own testing purposes, if you know of some media URLs to try, please feel free to try the “overlay” of media ideas inherent in today’s splice_audio.html live run. For today’s cake “prepared before the program” we’ve again channelled the GoToMeeting Primer Tutorial which had separate audio (albeit very short … sorry … but you get the gist) and video … well, below, you can click on the image to hear the presentation with audio and video synchronized, but only seconds 23 through to 47 of the video should play, with the presentation ending with the image below …
We think, though, that we will be back regarding this interesting topic, and hope we can improve mobile device functionality.
Today we’ve written a second draft of an HTML and Javascript web application that splices up to nine bits of audio or video or image input together, building on the previous Splicing Audio Primer Tutorial as shown below, here, and that can take any of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
video
image … and background image for webpage
… for either of the modes of use, that being …
discrete … or “Optional”
synchronized … or “Overlay”
The major new change here, apart from the ability to play two media files at once in our synchronized (or “overlayed”) way, is the additional functionality for Video, and we proceeded thinking there’d be a Javascript DOM OOPy method like … var xv = new Video(); … to allow for this, but found out from this useful link … thanks … that an alternative approach for Video object creation, on the fly, is …
var xv = document.createElement("VIDEO");
… curiously. And it took us a while to twig to the idea that to have a “display home” for the video on the webpage we needed to …
document.body.appendChild(xv);
… which means you need to take care of any HTML form data already filled in, that isn’t that form’s default, when you effectively “refresh” the webpage like this. Essentially though, media on the fly is a modern approach possible fairly easily with just clientside code. Cute, huh?!
Of course, what we still miss here is the capability to upload from a local place onto the web server, here at RJM Programming, which we may consider in future, and there are some of those other synchronization of media themed blog postings of the past which you may want to read for more on this type of approach.
In the meantime, if you know of some media URLs to try, please feel free to try the “overlay” of media ideas inherent in today’s splice_audio.html live run. We’ve thought of this one. Do you remember how the GoToMeeting Primer Tutorial had separate audio (albeit very short … sorry … but you get the gist) and video … well, below, you can click on the image to hear the presentation with audio and video synchronized, with the presentation ending with the image below …
We think, though, that we will be back regarding this interesting topic.
Today we’ve written a first draft of an HTML and Javascript web application that splices up to nine bits of audio input together that can take either of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
Do you remember, perhaps, when we did a series of blog posts regarding the YouTube API, that finished, so far, with YouTube API Iframe Synchronicity Resizing Tutorial? Well, a lot of what we do today is doing similar sorts of functionalities but just for Audio objects in HTML5. For help on this we’d like to thank this great link. So rather than have HTML audio elements in our HTML, as we first shaped to do, we’ve taken the great advice from this link, and gone all Javascript DOM OOPy on the task, to splice audio media together.
There were three thought patterns going on here for me.
The first was a simulation of those Sydney train public announcements where the timbre of the voice differs a bit between when they say “Platform” and the “6” (or whatever platform it is) that follows. This is pretty obviously computer audio “bits” strung together … and wanted to get somewhere towards that capability.
The second one relates to presentation ideas following up on that “onmouseover” Siri audio enhanced presentation we did at Apple iOS Siri Audio Commentary Tutorial. Well, we think we can do something related to that here, and we’ve prepared this cake audio presentation here, for us, in advance … really, there’s no need for thanks.
The third concerns our eternal media file synchronization quests here at this blog that you may find of interest we hope, here.
Also of interest over time has been the Google Translate Text to Speech functionality that used to be very open, and we now only use around here in an interactive “user clicks” way … but we still use it, because it is very useful, so, thanks. But trying to get this method working for “Platform” and “6” without a yawning gap in between ruins the spontaneity and fun somehow, but there’s nothing stopping you making your own audio files yourself as we did in that Siri tutorial called Apple iOS Siri Audio Commentary Tutorial and take the HTML and Javascript code you could call splice_audio.html from today, and go and make your own web application? Now, is there? Huh?
YouTube Video List of Play External Javascript Supervisor Tutorial
Revisiting web application projects, and incorporating new functionality, we get best joy when a suite of web applications like our “Brady Bunch YouTube Music” peer to peer suite can all be attended to via changes to just the one external Javascript they all call. In a word, this is “modularization” at play.
… with us still differentiating the workflow logic between non-mobile and mobile “YouTube Video List of Plays”, though it was tempting to adopt yesterday’s mobile logic for non-mobile! Maybe we will into the future!
… it took us all of two days to not need to do this, but today’s “peer to peer suite” of interfacings involve …
YouTube videos of unknown duration ...
… that is a “whole new ball game” for our coding. We either expend what we deemed as too much effort ahead of our “YouTube List of Plays” working out durations, or tweak the “grandchild” for more flexibility should durations not be known, and that is where the “mobile” work from two days ago outplays the “non-mobile” work, because delays regarding …
the “non-mobile” logic works via setTimeout based delays … are much dumberless user responsive than …
the “mobile” logic revolves around a YouTube API event logic interception point at the YouTube API determined “end of video play” overriding (what we start with as) an overinflated duration guesstimate applied
… needing the new external Javascript function …
var thevidtocheck=0, vidstocheck=[], frombizzo=[], tobizzo=[];
var alookmade=false, wowowo=null, wowowourl='';
… in our changedyoutube_brady_bunch.js being used in our inhouse Disco web application “peer to peer suite” web application example where a 🌓 -> 🌗 emoji button accesses this new sequential play possibility.
Still and all, when we know a duration on non-mobile we’re going to keep the setTimeout arrangement coding ready. Down the line, for some mobile scenario, it may be needed the other way around … who knows?!
Some of the motivation for today’s addition of Shuffle logic to the “YouTube Video List of Plays” work we last mentioned with Spliced Audio/Video YouTube Mobile Recall Tutorial of recent times is to do with the huge percentage of time spent, in such a project, running the same test conditions again and again and again, and we were getting a bit bored with the same order of songs each time, for hundreds of tests.
We thought it would be pretty easy to do, but strangely, there were timing issues making it more complicated than we thought it would be for a user who clicks the new Shuffle checkbox that appears when a “YouTube Video List of Plays” is detected. For the non-mobile side of the work, we can point to a new Javascript function that helps with most of the work …
… with us still differentiating the workflow logic between non-mobile and mobile “YouTube Video List of Plays”, though it was tempting to adopt yesterday’s mobile logic for non-mobile! Maybe we will into the future!
Spliced Audio/Video YouTube Mobile Recall Tutorial
Of course we want to be like Spotify, with the one tap meaning peace for long periods playing music, as it is with YouTube playlists on mobile platforms. God knows, we’ve winged enough about this mobile requirement to have a real user tap precede media play. And so, in this context, yesterday’s Spliced Audio/Video YouTube Recall Tutorial‘s non-mobile “playlist” style YouTube video playing was a “walk in the park”.
… we finally arrived at a happy home for our interventional Javascript … as well as a first play intervention. As important as “intervention” is, this intervention needed some “preparatory intervention” that little bit earlier on in time …
Methinks it’s doubtful this avoidance of “all but the first” mobile tap “ask” can be avoided not using the YouTube API. For mobile, with these inhouse …
… we cut out “parent” involvement in any “decision making” sense, except as the place where checkboxes determine what is played, otherwise we know, you risk needing to re-rely on user instigated taps after the first when playing mobile “playlist” video lists sequentially.
… organize themselves with a hashtag based calling URL logic when multiple YouTube videos are being asked to play sequentially. Here is new Javascript featuring in the inhouse YouTube API Video Player via lhchk(”); new call at the document.body onload event …
… we find useful, around here, to describe web page design issues and solutions. They both come into play a lot, at least for us. Mind you, our thoughts may have pared down complication thinking to arrive at these two concept foci.
Today, we’re rewriting the existant Overlay logic from 2016 into our inhouse Spliced Audio/Video/Image web application, to cater for those data URL additional input data functionalities we’ve added with our 2025 revisit.
… template HTML Javascript variables a style=”display:none;” … and curiously, we were going to use style=”” but this actually has more meaning than you’d think, as it seems to create a set of default styling decisions?! … and then at document.body onload event we have …
Another day, another deliberation about delimitation! Yes, as a programmer, of the mere mortal variety, and you ask a bit of your engaged users, you’ll not have much chance of …
changing hardware
changing firmware
changing environment … very much
… achieving your ends … ngah ha ha!
But you have got delimitation on your side.
Down @ …
Wait for ! to finish first …
We do this type of work a lot, and recommend …
get firm idea in the mind of what you want to achieve (yesterday and today, it being the “midstream” ability to change YouTube video ID references using our inhouse YouTube video interfacer) … including …
what might happen into the future … and allow for flexibility should a new piece of functionality happen into the future … and …
the work is often behind the scenes, and the user not interested in odd arrangements, so flag with title hovering (non-mobile) and/or placeholder (non-mobile or mobile) textbox flagging of your preferred usage … though …
behind the scenes, where possible, allow for more flexibility, regarding which delimiter makes the functionality happen, should the user forget, and flounder
generally speaking textbox static HTML like …
<input placeholder="Can | separate time to next YouTube video ID (use ; for just audio)" style="width:400px;" onblur="checkval(this);" type="text" onmouseover="toms(this);" id="i1" name="i1" value="" title="0:00:00">
…
but reworked for the first such textbox (and no, using CSS for this is not recommended) using Javascript …
setTimeout(function(){ if (document.getElementById('i0')) { document.getElementById('i0').placeholder='Can ; separate time to flag Just Audio'; } }, 4000);
… to help inform the user of what is possible … ngah ha ha
No | … wait … ~!@#$%%%#@#$^& … ouch
Proof of the delimitation pudding here, we think, is that we could move off yesterday’s …
blast from the past version
… working methodically through the issues, making the current version better too. You will find web browser Web Inspectors invaluable here. We used a “suck it and see” approach to making changes and saw where the Web Inspector took us regarding errors we caused, and fixed those, so that we could apply the better logic to the most recent version of our inhouse YouTube video interfacer.
text to send to Google Translate … is bolstered today via a new modus operandi …
via YouTube video ID 11 character code play a YouTube video
… feeding into our YouTube API work, but resurrecting an inhouse “blast from the past version” close to being okay, on non-mobile, at changing YouTube video IDs “midstream” and keeping the video playing, all within the one webpage window. We’re sure we had ideas like this in mind when using the “Splicing” word in the web application title. We’ve always been a bit obsessed with those train announcements piecing a message together, especially when it comes to “number words” via a combination of audio media snippets.
Also, today, with limited practical success, we’re trying to allow a user to loop through their media list, repeating the playing. There are that many ways we can get interrupted achieving this, it’s not funny … really … but we’re not getting upset because the user can go back to the source window and reclick buttons of their choice to keep the good times rolling … ah, MacArthur Park … again!
If you’ve been following yesterday’s Spliced Audio/Video Styling Tutorial Spliced Media synchronized play project of recent times, you’ll probably guess what our “project word” would be, that being …
duration
… as a “measure” of importance to help with the sequential play of media, out of …
audio
video
image
… choices of “media category” we’re offering in this project. But, what “duration” applies to image choice above? Well, we just hardcode 5 seconds for …
(non-animated) JPEG or PNG or GIF … but …
animated GIFs have a one cycle through duration …
… that we want to help calculate for the user and show in that relevant “end of” timing textbox. Luckily, we’ve researched this in the past, but every scenario is that bit different, we find, and so here is the Javascript for what we’re using …
function prefetch(whatgifmaybe) { // thanks to https://stackoverflow.com/questions/69564118/how-to-get-xxduration-of-gif-image-in-javascript#:~:text=Mainly%20use%20parseGIF()%20%2C%20then,xxduration%20of%20a%20GIF%20image.
// lastgifpreq and lastgifurl are globals declared elsewhere in the web application
if ((whatgifmaybe.toLowerCase().trim().split('#')[0].replace('/gif;', '/gif?;') + '?').indexOf('.gif?') != -1 && lastgifpreq != whatgifmaybe.split('?')[0]) {
lastgifpreq=whatgifmaybe.split('?')[0];
if (whatgifmaybe.indexOf('/tmp/') != -1) {
lastgifurl='/tmp/' + whatgifmaybe.split('/tmp/')[1];
} else {
lastgifurl='';
}
document.body.style.cursor='progress';
whatgifmaybe=whatgifmaybe.split('?')[0];
//alert('whatgifmaybe=' + whatgifmaybe);
// ... the GIF fetch and frame delay parsing continues here, elided in this excerpt ...
}
}
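As for actually measuring that “one cycle through duration”, a minimal sketch of the byte scanning idea (our own illustration, not the parseGIF library code the StackOverflow link uses) might look like this. An animated GIF stores a per-frame delay, as a little-endian word of hundredths of a second, inside each Graphic Control Extension block (bytes 0x21 0xF9 0x04 …), so summing those delays gives the cycle length. Note this naive scan does not skip image data sub-blocks, so a stray 0x21 0xF9 byte pair inside pixel data could cause a false hit.

```javascript
// Hedged sketch: sum the per-frame delays of an animated GIF by scanning
// its bytes for Graphic Control Extension blocks (0x21 0xF9 0x04). The
// delay field is a little-endian word in hundredths of a second at +4/+5.
function gifCycleSeconds(bytes) { // bytes: Uint8Array of the GIF file
var hundredths = 0;
for (var i = 0; i < bytes.length - 5; i++) {
if (bytes[i] === 0x21 && bytes[i + 1] === 0xF9 && bytes[i + 2] === 0x04) {
hundredths += bytes[i + 4] + 256 * bytes[i + 5];
i += 7; // jump past this extension block
}
}
return hundredths / 100;
}
```

Fetching the GIF as an ArrayBuffer (via fetch() or FileReader) would supply the Uint8Array, and the result could be rounded up into the relevant “end of” timing textbox.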
Today, we start concertinaing multiple image asks (and yet show image order), using our favourite “reveal” tool, the details/summary HTML element dynamic duo, so that fewer images disappear “below the fold” this way in …
We have “bad hair days”, but that doesn’t stop us seeking “styling days”. Yes, we often separate CSS styling into an issue that is addressed only if we deem the project warrants it, and we’ve decided this latest Spliced Audio/Video web application project is worth it. This means that further to yesterday’s Spliced Audio/Video Browsing Data URL Tutorial we have some new CSS styling in today’s work …
… into (in the second phase of its existence within an execution run) a …
linear gradient inspired progress bar … the secret to the “hard stops” being great new advice we got from this link, thanks … to come up with dynamic CSS styling via Javascript …
function dstyleit(what) {
var oney=what.split('yellow ')[1].split('%')[0] + '%,';
what=what.replace(oney,oney + oney);
document.getElementById('dstyle').innerHTML+='<style> #subis { background: ' + what + ' } </style>';
return what;
}
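For the record, the “hard stop” effect itself comes down to repeating a stop position, so two colours meet with no fade between them, which is exactly what the doubled-up yellow stop above achieves. A minimal sketch of that idea in isolation (our own helper with made-up colours, not the dstyleit logic) …

```javascript
// Hedged sketch: a "hard stop" linear gradient repeats a stop position so
// the filled (yellow) part meets the unfilled (lightgray) part abruptly,
// giving a progress bar look with pure CSS.
function hardStopGradient(percent) {
return 'linear-gradient(to right, yellow 0%, yellow ' + percent + '%, ' +
'lightgray ' + percent + '%, lightgray 100%)';
}
// e.g. someElement.style.background = hardStopGradient(40); // 40% "progress"
```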
browsing for local files off the client operating system environment … and …
a genericized “guise” of web server media files … and …
client File API blob or canvas content media representations
… a point of commonality. As far as we are concerned, these days, whenever we can, data size permitting, we are always looking to do away with static web server references, open as they are to so many mixed content and privilege issues.
Perhaps, that is the major reason, these days, we’ve taken more and more to hashtag based means of communication via “a” “mailto:” (email) or “sms:” (SMS) links.
Today proved to us that the clientside hashtag approach, albeit the most flexible of all, has its data limitations; PHP serverside form method=POST does better at accommodating large amounts of data.
And so, yesterday, with our Splicing Audio or Video inhouse web application, turning more towards data URLs to solve issues, we started the day …
trying hashtag clientside based navigations … but ran into data size issues, so …
introduced into our Splicing Audio or Video inhouse web application, for the first time, PHP serverside involvement …
… that made things start working better for us, getting the synchronized play of our user entered audio or video media items performing as hoped.
And so, further to yesterday’s Spliced Audio/Video Data URL Tutorial we added browsing for local media files as a new option for user input, by calling …
function ahere(evt) { // audio duration has become available
if (document.getElementById('sanaudio').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(document.getElementById('anaudio').duration);
//alert('audio here ' + document.getElementById('anaudio').duration);
}
}
function alere(evt) { // audio loaded
if (document.getElementById('sanaudio').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(document.getElementById('anaudio').duration);
//alert('audio Here');
}
}
function vhere(evt) { // video duration has become available
if (document.getElementById('sanvideo').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(document.getElementById('anvideo').duration);
//alert('video here ' + document.getElementById('anvideo').duration);
}
}
function vlere(evt) { // video loaded
if (document.getElementById('sanvideo').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(document.getElementById('anvideo').duration);
//alert('video Here');
}
}
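One wrinkle worth hedging against here: a media element’s duration property is NaN until its loadedmetadata event fires, and can be Infinity for some streamed media, and Math.ceil of either produces an unhelpful textbox value. A small guard helper (our own suggestion, not part of the code above) covers that …

```javascript
// Hedged sketch: guard [media element].duration before writing it to a
// textbox, because it is NaN until loadedmetadata fires and can be
// Infinity for some streams; return '' when no sensible value exists yet.
function ceilDuration(d) {
if (typeof d !== 'number' || !isFinite(d)) { return ''; } // not ready yet
return '' + Math.ceil(d);
}
// e.g. document.getElementById(durtoid).value = ceilDuration(mediaElement.duration);
```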
This works if you can fill in the src attribute of the relevant subelement source element with a suitable data URL (we used the changed do_away_with_the_boring_bits.php helper PHP to derive these). From there, in that event logic, an [element].duration is there to help fill out those end of play textboxes in a more automated fashion for the user who wants to use this new functionality, as they fill out the Spliced Media form presented.
Today we’ve written a third draft of an HTML and Javascript web application that splices up to nine bits of audio or video or image input together, building on the previous Spliced Audio/Video/Image Overlay Tutorial as shown below, here, and that can take any of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
video
image … and background image for webpage
… for either of the modes of use, that being …
discrete … or “Optional”
synchronized … or “Overlay”
… all like yesterday, but this time we allow you to “seek” or position yourself within the audio and/or video media. We still “fit” all this into GET parameter usage. Are you thinking we are a tad lazy with this approach? Well, perhaps a little, but it also means you can do this job just using clientside HTML and Javascript, without having to involve any serverside code like PHP, and in this day and age, people are much keener on this “just clientside” or “just client looking, plus, perhaps, Javascript serverside code” (a la Node.js) or perhaps “Javascript clientside client code, plus Ajax methodologies”. In any case, it does simplify design to not have to involve a serverside language like PHP … but please don’t think we do not encourage you to learn a serverside language like PHP.
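By way of illustration of that GET parameter usage, here is a minimal clientside sketch (with made-up parameter names m1 through m9, not the web application’s actual ones) of packing a media list into a query string and reading it back, no PHP required …

```javascript
// Hedged sketch: pack up to nine media URLs into GET parameters (made-up
// names m1..m9) via URLSearchParams, and unpack them again clientside.
function packMedia(urls) {
var sp = new URLSearchParams();
urls.slice(0, 9).forEach(function (u, i) { sp.set('m' + (i + 1), u); });
return sp.toString(); // suitable for appending after '?' in a URL
}
function unpackMedia(qs) {
var sp = new URLSearchParams(qs), out = [], i;
for (i = 1; i <= 9; i++) {
if (sp.has('m' + i)) { out.push(sp.get('m' + i)); }
}
return out;
}
```

URLSearchParams takes care of the encoding/decoding of awkward characters, which is half the battle with media URLs as GET data.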
While we are at it here, we continue to think about the mobile device unfriendliness of our current web application, it being, these days, that setting the autoplay property for a media object is frowned upon on these mobile devices … for reasons of “runaway” unknown charge issues, as you can read at this useful link … thanks … and where they quote from Apple …
“Apple has made the decision to disable the automatic playing of video on iOS devices, through both script and attribute implementations.
In Safari, on iOS (for all devices, including iPad), where the user may be on a cellular network and be charged per data unit, preload and auto-play are disabled. No data is loaded until the user initiates it.” – Apple documentation.
A link we’d like to thank regarding the new “seek” or media positioning functionality is this one … thanks.
Also, today, for that sense of symmetry, we start to create the Audio objects from now on using …
document.createElement("AUDIO");
… as this acts the same as new Audio() to the best of our testing.
For your own testing purposes, if you know of some media URLs to try, please feel free to try the “overlay” of media ideas inherent in today’s splice_audio.htm live run. For today’s cake “prepared before the program” we’ve again channelled the GoToMeeting Primer Tutorial which had separate audio (albeit very short … sorry … but you get the gist) and video … well, below, you can click on the image to hear the presentation with audio and video synchronized, but only seconds 23 through to 47 of the video should play, and the presentation ending with the image below …
We think, though, that we will be back regarding this interesting topic, and hope we can improve mobile device functionality.
Today we’ve written a second draft of an HTML and Javascript web application that splices up to nine bits of audio or video or image input together, building on the previous Splicing Audio Primer Tutorial as shown below, here, and that can take any of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
video
image … and background image for webpage
… for either of the modes of use, that being …
discrete … or “Optional”
synchronized … or “Overlay”
The major new change here, apart from the ability to play two media files at once in our synchronized (or “overlayed”) way, is the additional functionality for Video, and we proceeded thinking there’d be a Javascript DOM OOPy method like … var xv = new Video(); … to allow for this, but found out from this useful link … thanks … that an alternative approach for Video object creation, on the fly, is …
var xv = document.createElement("VIDEO");
… curiously. And it took us a while to tweak to the idea that to have a “display home” for the video on the webpage we needed to …
document.body.appendChild(xv);
… which means you need to take care of any HTML form data already filled in, that isn’t that form’s default, when you effectively “refresh” the webpage like this. Essentially though, media on the fly is a modern approach possible fairly easily with just clientside code. Cute, huh?!
Of course, what we still miss here is the capability to upload from a local place onto the web server here at RJM Programming, which we may consider in future, and which some of those other synchronization of media themed blog postings of the past, which you may want to read, explore for this type of approach.
In the meantime, if you know of some media URLs to try, please feel free to try the “overlay” of media ideas inherent in today’s splice_audio.htm live run. We’ve thought of this one. Do you remember how the GoToMeeting Primer Tutorial had separate audio (albeit very short … sorry … but you get the gist) and video … well, below, you can click on the image to hear the presentation with audio and video synchronized, and the presentation ending with the image below …
We think, though, that we will be back regarding this interesting topic.
Today we’ve written a first draft of an HTML and Javascript web application that splices up to nine bits of audio input together that can take either of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
Do you remember, perhaps, when we did a series of blog posts regarding the YouTube API, that finished, so far, with YouTube API Iframe Synchronicity Resizing Tutorial? Well, a lot of what we do today is doing similar sorts of functionalities but just for Audio objects in HTML5. For help on this we’d like to thank this great link. So rather than have HTML audio elements in our HTML, as we first shaped to do, we’ve taken the great advice from this link, and gone all Javascript DOM OOPy on the task, to splice audio media together.
There were three thought patterns going on here for me.
The first was a simulation of those Sydney train public announcements where the timbre of the voice differs a bit between when they say “Platform” and the “6” (or whatever platform it is) that follows. This is pretty obviously computer audio “bits” strung together … and we wanted to get somewhere towards that capability.
The second one relates to presentation ideas following up on that “onmouseover” Siri audio enhanced presentation we did at Apple iOS Siri Audio Commentary Tutorial. Well, we think we can do something related to that here, and we’ve prepared this cake audio presentation here, for us, in advance … really, there’s no need for thanks.
The third concerns our eternal media file synchronization quests here at this blog that you may find of interest we hope, here.
Also of interest over time has been the Google Translate Text to Speech functionality that used to be very open, and we now only use around here in an interactive “user clicks” way … but we still use it, because it is very useful, so, thanks. But trying to get this method working for “Platform” and “6” without a yawning gap in between ruins the spontaneity and fun somehow, but there’s nothing stopping you making your own audio files yourself, as we did in that Siri tutorial called Apple iOS Siri Audio Commentary Tutorial, taking the HTML and Javascript code you could call splice_audio.html from today, and going off to make your own web application. Now, is there? Huh?
Some of the motivation for today’s addition of Shuffle logic to the “YouTube Video List of Plays” work we last mentioned with Spliced Audio/Video YouTube Mobile Recall Tutorial of recent times is to do with the huge percentage of time spent, in such a project, running the same test conditions again and again and again, and we were getting a bit bored with the same order of songs each time, for hundreds of tests.
We thought it would be pretty easy to do, but strangely, there were timing issues making it more complicated than we thought it would be for a user who clicks the new Shuffle checkbox that appears when a “YouTube Video List of Plays” is detected. For the non-mobile side of the work, we can point to a new Javascript function that helps with most of the work …
… with us still differentiating the workflow logic between non-mobile and mobile “YouTube Video List of Plays”, though it was tempting to adopt yesterday’s mobile logic for non-mobile! Maybe we will into the future!
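For what it’s worth, the reordering at the heart of such Shuffle logic can be done without bias via the classic Fisher-Yates approach, sketched here as our own illustration (not the exact inhouse function alluded to above) …

```javascript
// Hedged sketch: an in-place Fisher-Yates shuffle, the standard unbiased
// way to reorder a "YouTube Video List of Plays" before sequential play.
function shuffleList(list) {
for (var i = list.length - 1; i > 0; i--) {
var j = Math.floor(Math.random() * (i + 1)); // pick from the unshuffled prefix
var tmp = list[i]; list[i] = list[j]; list[j] = tmp; // swap into place
}
return list;
}
// e.g. shuffleList(videoIds); // same IDs, fresh order for the next test run
```

Naive alternatives like sorting on Math.random() are subtly biased, which matters when you are rerunning the same test list hundreds of times.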
Spliced Audio/Video YouTube Mobile Recall Tutorial
Of course we want to be like Spotify, with the one tap meaning peace for long periods playing music, as it is with YouTube playlists on mobile platforms. God knows, we’ve whinged enough about this mobile requirement to have a real user tap precede media play. And so, in this context, yesterday’s Spliced Audio/Video YouTube Recall Tutorial‘s non-mobile “playlist” style YouTube video playing was a “walk in the park”.
… we finally arrived at a happy home for our interventional Javascript … as well as a first play intervention. As important as “intervention” is, this intervention needed some “preparatory intervention” that little bit earlier on in time …
Methinks it’s doubtful this “all but the first” mobile tap “ask” can be avoided without using the YouTube API. For mobile, with these inhouse …
… we cut out “parent” involvement in any “decision making” sense, except as the place where checkboxes determine what is played, otherwise we know, you risk needing to re-rely on user instigated taps after the first when playing mobile “playlist” video lists sequentially.
… organize themselves with a hashtag based calling URL logic when multiple YouTube videos are being asked to play sequentially. Here is new Javascript featuring in the inhouse YouTube API Video Player via lhchk(”); new call at the document.body onload event …
… we find useful, around here, to describe web page design issues and solutions. They both come into play a lot, at least for us. Mind you, our thoughts may have pared down complication thinking to arrive at these two concept foci.
Today, we’re rewriting the existant Overlay logic from 2016 into our inhouse Spliced Audio/Video/Image web application, to cater for those data URL additional input data functionalities we’ve added with our 2025 revisit.
… template HTML Javascript variables a style=”display:none;” … and curiously, we were going to use style=”” but this actually has more meaning than you’d think, as it seems to create a set of default styling decisions?! … and then at document.body onload event we have …
Another day, another deliberation about delimitation! Yes, as a programmer, of the mere mortal variety, and you ask a bit of your engaged users, you’ll not have much chance of …
changing hardware
changing firmware
changing environment … very much
… achieving your ends … ngah ha ha!
But you have got delimitation on your side.
Down @ …
Wait for ! to finish first …
We do this type of work a lot, and recommend …
get firm idea in the mind of what you want to achieve (yesterday and today, it being the “midstream” ability to change YouTube video ID references using our inhouse YouTube video interfacer) … including …
what might happen into the future … and allow for flexibility should a new piece of functionality happen into the future … and …
the work is often behind the scenes, and the user not interested in odd arrangements, so flag with title hovering (non-mobile) and/or placeholder (non-mobile or mobile) textbox flagging of your preferred usage … though …
behind the scenes, where possible, allow for more flexibility, regarding which delimiter makes the functionality happen, should the user forget, and flounder
generally speaking textbox static HTML like …
<input placeholder="Can | separate time to next YouTube video ID (use ; for just audio)" style="width:400px;" onblur="checkval(this);" type="text" onmouseover="toms(this);" id="i1" name="i1" value="" title="0:00:00">
…
but reworked for the first such textbox (and no, using CSS for this is not recommended) using Javascript …
setTimeout(function(){ if (document.getElementById('i0')) { document.getElementById('i0').placeholder='Can ; separate time to flag Just Audio'; } }, 4000);
… to help inform the user of what is possible … ngah ha ha
No | … wait … ~!@#$%%%#@#$^& … ouch
Proof of the delimitation pudding here, we think, is that we could move off yesterday’s …
blast from the past version
… working methodically through the issues, making the current version better too. You will find web browser Web Inspectors invaluable here. We used a “suck it and see” approach to making changes and saw where the Web Inspector took us regarding errors we caused, and fixed those, so that we could apply the better logic to the most recent version of our inhouse YouTube video interfacer.
text to send to Google Translate … is bolstered today via a new modus operandum …
via YouTube video ID 11 character code play a YouTube video
… feeding into our YouTube API work, but resurrecting an inhouse “blast from the past version” close to being okay, on non-mobile, changing YouTube video IDs “midstream” and keeping the video playing work all within the one webpage window. Am sure we had ideas like this in mind using the “Splicing” word in the web application title. We’ve always been a bit obsessed with those train announcements piecing a message together, especially when it comes to “number words” via a combination of audio media snippets.
Also, today, with limited practical success, we’re trying to allow a user to loop through their media list, repeating the playing. There are that many ways we can get interrupted achieving this, it’s not funny … really … but we’re not getting upset because the user can go back to the source window and reclick buttons of their choice to keep the good times rolling … ah, MacArthur Park … again!
If you’ve been following yesterday’s Spliced Audio/Video Styling Tutorial Spliced Media synchronized play project of recent times, you’ll probably guess what our “project word” would be, that being …
duration
… as a “measure” of importance to help with the sequential play of media, out of …
audio
video
image
… choices of “media category” we’re offering in this project. But, what “duration” applies to image choice above? Well, we just hardcode 5 seconds for …
(non-animated) JPEG or PNG or GIF … but …
animated GIFs have a one cycle through duration …
… that we want to help calculate for the user and show in that relevant “end of” timing textbox. Luckily, we’ve researched this in the past, but every scenario is that bit different, we find, and so here is the Javascript for what we’re using …
function prefetch(whatgifmaybe) { // thanks to https://stackoverflow.com/questions/69564118/how-to-get-xxduration-of-gif-image-in-javascript#:~:text=Mainly%20use%20parseGIF()%20%2C%20then,xxduration%20of%20a%20GIF%20image.
if ((whatgifmaybe.toLowerCase().trim().split('#')[0].replace('/gif;', '/gif?;') + '?').indexOf('.gif?') != -1 && lastgifpreq != whatgifmaybe.split('?')[0]) {
lastgifpreq=whatgifmaybe.split('?')[0];
if (whatgifmaybe.indexOf('/tmp/') != -1) {
lastgifurl='/tmp/' + whatgifmaybe.split('/tmp/')[1];
} else {
lastgifurl='';
}
document.body.style.cursor='progress';
whatgifmaybe=whatgifmaybe.split('?')[0];
//alert('whatgifmaybe=' + whatgifmaybe);
Today, we start concertinaing multiple image asks (and yet show image order), using our favourite “reveal” tool, the details/summary HTML element dynamic duo, so that there are less images disappearing “below the fold” happening this way in …
We have “bad hair days”, but that doesn’t stop us seeking “styling days”. Yes, we often separate CSS styling into an issue that is addressed only if we deem the project warrants it, and we’ve decided this latest Spliced Audio/Video web application project is worth it. This means that further to yesterday’s Spliced Audio/Video Browsing Data URL Tutorial we have some new CSS styling in today’s work …
… into (in the second phase of it’s existence within an execution run) a …
linear gradient inspired progress bar … the secret to the “hard stops” we got great new advice from this link, thanks … to come up with dynamic CSS styling via Javascript …
function dstyleit(what) {
var oney=what.split('yellow ')[1].split('%')[0] + '%,';
what=what.replace(oney,oney + oney);
document.getElementById('dstyle').innerHTML+='<style> #subis { background: ' + what + ' } </style>';
return what;
}
browsing for local files off the client operating system environment … and …
a genericized “guise” of web server media files … and …
client File API blob or canvas content media representations
… a point of commonality, and as far as we are concerned, we are always looking to these days, whenever we can, data size permitting, to do away with static web server references, open to so many mixed content and privilege issues.
Perhaps, that is the major reason, these days, we’ve taken more and more to hashtag based means of communication via “a” “mailto:” (email) or “sms:” (SMS) links.
Today proved to us that that, albeit most flexible of all, clientside hashtag approach has its data limitations, that PHP serverside form method=POST does better at, as far as accomodating large amounts of data.
And so, yesterday, with our Splicing Audio or Video inhouse web application, turning more towards data URLs to solve issues, we started the day …
trying hashtag clientside based navigations … but ran into data size issues, so …
introduced into our Splicing Audio or Video inhouse web application, for the first time, PHP serverside involvement …
… that made things start working for us better getting the synchronized play of our user entered audio or video media items performing better.
And so, further to yesterday’s Spliced Audio/Video Data URL Tutorial we added browsing for local media files as a new option for user input, by calling …
function ahere(evt) {
if (document.getElementById('sanaudio').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(eval('' + document.getElementById('anaudio').duration));
//alert('audio here');
//alert('audio here ' + document.getElementById('anaudio').duration);
}
}
function alere(evt) {
if (document.getElementById('sanaudio').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(eval('' + document.getElementById('anaudio').duration));
//alert('audio Here');
}
}
function vhere(evt) {
if (document.getElementById('sanvideo').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(eval('' + document.getElementById('anvideo').duration));
//alert('video here');
//alert('video here ' + document.getElementById('anvideo').duration);
}
}
function vlere(evt) {
if (document.getElementById('sanvideo').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(eval('' + document.getElementById('anvideo').duration));
//alert('video Here');
}
}
This works if you can fill in the src attribute of the relevant subelement source element with a suitable data URL (we used the changeddo_away_with_the_boring_bits.php helping PHP to derive). From there, in that event logic an [element].duration is there to help fill out those end of play textboxes in a more automated fashion for the user that wants to use this new functionality, as they fill out the Spliced Media form presented.
Today we’ve written a third draft of an HTML and Javascript web application that splices up to nine bits of audio or video or image input together, building on the previous Spliced Audio/Video/Image Overlay Tutorial as shown below, here, and that can take any of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
video
image … and background image for webpage
… for either of the modes of use, that being …
discrete … or “Optional”
synchronized … or “Overlay”
… all like yesterday, but this time we allow you to “seek” or position yourself within the audio and/or video media. We still all “fit” this into GET parameter usage. Are you thinking we are a tad lazy with this approach? Well, perhaps a little, but it also means you can do this job just using clientside HTML and Javascript, without having to involve any serverside code like PHP, and in this day and age, people are much keener on this “just clientside” or “just client looking, plus, perhaps, Javascript serverside code” (ala Node.js) or perhaps “Javascript clientside client code, plus Ajax methodologies”. In any case, it does simplify design to not have to involve a serverside language like PHP … but please don’t think we do not encourage you to learn a serverside language like PHP.
While we are at it here, we continue to think about the mobile device unfriendliness with our current web application, it being, these days, that the setting of the autoplay property for a media object is frowned upon regarding these mobile devices … for reasons of “runaway” unknown charge issues as you can read at this useful link … thanks … and where they quote from Apple …
“Apple has made the decision to disable the automatic playing of video on iOS devices, through both script and attribute implementations.
In Safari, on iOS (for all devices, including iPad), where the user may be on a cellular network and be charged per data unit, preload and auto-play are disabled. No data is loaded until the user initiates it.” – Apple documentation.
A link we’d like to thank regarding the new “seek” or media positioning functionality is this one … thanks.
Also, today, for that sense of symmetry, we start to create the Audio objects from now on using …
document.createElement("AUDIO");
… as this acts the same as new Audio() to the best of our testing.
For your own testing purposes, if you know of some media URLs to try, please feel free to try the “overlay” of media ideas inherent in today’s splice_audio.htmlive run. For today’s cake “prepared before the program” we’ve again channelled the GoToMeeting Primer Tutorial which had separate audio (albeit very short … sorry … but you get the gist) and video … well, below, you can click on the image to hear the presentation with audio and video synchronized, but only seconds 23 through to 47 of the video should play, and the presentation ending with the image below …
We think, though, that we will be back regarding this interesting topic, and hope we can improve mobile device functionality.
Today we’ve written a second draft of an HTML and Javascript web application that splices up to nine bits of audio or video or image input together, building on the previous Splicing Audio Primer Tutorial as shown below, here, and that can take any of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
video
image … and background image for webpage
… for either of the modes of use, that being …
discrete … or “Optional”
synchronized … or “Overlay”
The major new change here, apart from the ability to play two media files at once in our synchronized (or “overlayed”) way, is the additional functionality for Video, and we proceeded thinking there’d be an Javascript DOM OOPy method like … var xv = new Video(); … to allow for this, but found out from this useful link … thanks … that an alternative approach for Video object creation, on the fly, is …
var xv = document.createElement("VIDEO");
… curiously. And it took us a while to tweak to the idea that to have a “display home” for the video on the webpage we needed to …
document.body.appendChild(xv);
… which means you need to take care of any HTML form data already filled in, that isn’t that form’s default, when you effectively “refresh” the webpage like this. Essentially though, media on the fly is a modern approach possible fairly easily with just clientside code. Cute, huh?!
Of course, what we still miss here is the capability to upload from a local place onto the web server, here at RJM Programming, which we may consider in future. Some of those other synchronization-of-media themed blog postings of the past may also be worth reading regarding this type of approach.
In the meantime, if you know of some media URLs to try, please feel free to try the “overlay” of media ideas inherent in today’s splice_audio.htm live run. We’ve thought of this one. Do you remember how the GoToMeeting Primer Tutorial had separate audio (albeit very short … sorry … but you get the gist) and video? Well, below, you can click on the image to hear the presentation with audio and video synchronized, with the presentation ending with the image below …
We think, though, that we will be back regarding this interesting topic.
Today we’ve written a first draft of an HTML and Javascript web application that splices up to nine bits of audio input together that can take either of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
Do you remember, perhaps, when we did a series of blog posts regarding the YouTube API, that finished, so far, with YouTube API Iframe Synchronicity Resizing Tutorial? Well, a lot of what we do today performs similar sorts of functionality, but just for Audio objects in HTML5. For help on this we’d like to thank this great link. So rather than have HTML audio elements in our HTML, as we first shaped to do, we’ve taken the great advice from this link, and gone all Javascript DOM OOPy on the task, to splice audio media together.
There were three thought patterns going on here for me.
The first was a simulation of those Sydney train public announcements where the timbre of the voice differs a bit between when they say “Platform” and the “6” (or whatever platform it is) that follows. This is pretty obviously computer audio “bits” strung together … and we wanted to get somewhere towards that capability.
The second one relates to presentation ideas following up on that “onmouseover” Siri audio enhanced presentation we did at Apple iOS Siri Audio Commentary Tutorial. Well, we think we can do something related to that here, and we’ve prepared this cake audio presentation here, for us, in advance … really, there’s no need for thanks.
The third concerns our eternal media file synchronization quests here at this blog that you may find of interest we hope, here.
Also of interest over time has been the Google Translate Text to Speech functionality that used to be very open, and which we now only use around here in an interactive “user clicks” way … but we still use it, because it is very useful, so, thanks. Trying to get this method working for “Platform” and “6” without a yawning gap in between somehow ruins the spontaneity and fun, but there’s nothing stopping you making your own audio files yourself, as we did in that Siri tutorial called Apple iOS Siri Audio Commentary Tutorial, taking the HTML and Javascript code you could call splice_audio.html from today, and going off to make your own web application. Now, is there? Huh?
Some of the motivation for today’s addition of Shuffle logic to the “YouTube Video List of Plays” work we last mentioned with Spliced Audio/Video YouTube Mobile Recall Tutorial of recent times is to do with the huge percentage of time spent, in such a project, running the same test conditions again and again and again, and we were getting a bit bored with the same order of songs each time, for hundreds of tests.
We thought it would be pretty easy to do, but strangely, there were timing issues making it more complicated than we thought it would be for a user who clicks the new Shuffle checkbox that appears when a “YouTube Video List of Plays” is detected. For the non-mobile side of the work, we can point to a new Javascript function that helps with most of the work …
… with us still differentiating the workflow logic between non-mobile and mobile “YouTube Video List of Plays”, though it was tempting to adopt yesterday’s mobile logic for non-mobile! Maybe we will into the future!
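The helper function itself is elided above, so as a hedged sketch only (the function name is hypothetical, and the real inhouse function also juggles those timing issues), a Fisher–Yates shuffle of the “YouTube Video List of Plays” indices might look like …

```javascript
// Hypothetical sketch, not the inhouse function: Fisher-Yates shuffle of
// the playlist indices, so repeated test runs play songs in a new order.
function shuffleOrder(howMany) {
  var order = [];
  for (var i = 0; i < howMany; i++) { order.push(i); }
  for (var j = order.length - 1; j > 0; j--) {
    var k = Math.floor(Math.random() * (j + 1)); // pick a slot from 0..j
    var tmp = order[j]; order[j] = order[k]; order[k] = tmp; // swap
  }
  return order;
}
```

A caller could then play the list entries in `shuffleOrder(list.length)` order rather than 0, 1, 2, … each time.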
Spliced Audio/Video YouTube Mobile Recall Tutorial
Of course we want to be like Spotify, with the one tap meaning peace for long periods playing music, as it is with YouTube playlists on mobile platforms. God knows, we’ve whinged enough about this mobile requirement to have a real user tap precede media play. And so, in this context, yesterday’s Spliced Audio/Video YouTube Recall Tutorial‘s non-mobile “playlist” style YouTube video playing was a “walk in the park”.
… we finally arrived at a happy home for our interventional Javascript … as well as a first play intervention. As important as “intervention” is, this intervention needed some “preparatory intervention” that little bit earlier on in time …
Methinks it’s doubtful this “all but the first” mobile tap “ask” can be avoided without using the YouTube API. For mobile, with these inhouse …
… we cut out “parent” involvement in any “decision making” sense, except as the place where checkboxes determine what is played; otherwise, we know, you risk needing to re-rely on user instigated taps after the first when playing mobile “playlist” video lists sequentially.
… organize themselves with a hashtag based calling URL logic when multiple YouTube videos are being asked to play sequentially. Here is new Javascript featuring in the inhouse YouTube API Video Player via a new lhchk('') call at the document.body onload event …
… we find useful, around here, to describe web page design issues and solutions. They both come into play a lot, at least for us. Mind you, our thoughts may have pared down complication thinking to arrive at these two concept foci.
Today, we’re rewriting the existing Overlay logic from 2016 into our inhouse Spliced Audio/Video/Image web application, to cater for those data URL additional input data functionalities we’ve added with our 2025 revisit.
… we give our template HTML Javascript variables a style="display:none;" … and curiously, we were going to use style="" but this actually has more meaning than you’d think, as it seems to create a set of default styling decisions?! … and then at the document.body onload event we have …
Another day, another deliberation about delimitation! Yes, as a programmer of the mere mortal variety, if you ask a bit of your engaged users, you’ll not have much chance of …
changing hardware
changing firmware
changing environment … very much
… achieving your ends … ngah ha ha!
But you have got delimitation on your side.
Down @ …
Wait for ! to finish first …
We do this type of work a lot, and recommend …
get a firm idea in your mind of what you want to achieve (yesterday and today, it being the “midstream” ability to change YouTube video ID references using our inhouse YouTube video interfacer) … including …
what might happen into the future … and allow for flexibility should a new piece of functionality happen into the future … and …
the work is often behind the scenes, and the user is not interested in odd arrangements, so flag your preferred usage with title hovering (non-mobile) and/or placeholder (non-mobile or mobile) textbox hints … though …
behind the scenes, where possible, allow for more flexibility, regarding which delimiter makes the functionality happen, should the user forget, and flounder
generally speaking textbox static HTML like …
<input placeholder="Can | separate time to next YouTube video ID (use ; for just audio)" style="width:400px;" onblur="checkval(this);" type="text" onmouseover="toms(this);" id="i1" name="i1" value="" title="0:00:00">
…
but reworked for the first such textbox (and no, using CSS for this is not recommended) using Javascript …
setTimeout(function(){ if (document.getElementById('i0')) { document.getElementById('i0').placeholder='Can ; separate time to flag Just Audio'; } }, 4000);
… to help inform the user of what is possible … ngah ha ha
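Behind-the-scenes delimiter flexibility like this can be sketched, in a hedged way (the helper name and return shape are hypothetical, not the inhouse code), as …

```javascript
// Hypothetical sketch: accept either | (time then YouTube video ID) or
// ; (the "just audio" flavour) as the delimiter, so the functionality
// still happens should the user forget which one was preferred.
function splitEntry(entry) {
  var justAudio = entry.indexOf(';') != -1 && entry.indexOf('|') == -1;
  var parts = entry.split(justAudio ? ';' : '|');
  return { time: parts[0], videoId: parts[1] || '', justAudio: justAudio };
}
```

So `splitEntry('0:01:30|dQw4w9WgXcQ')` and `splitEntry('0:01:30;dQw4w9WgXcQ')` both recover the time and the 11 character YouTube video ID, differing only in the “just audio” flag.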
No | … wait … ~!@#$%%%#@#$^& … ouch
Proof of the delimitation pudding here, we think, is that we could move off yesterday’s …
blast from the past version
… working methodically through the issues, making the current version better too. You will find web browser Web Inspectors invaluable here. We used a “suck it and see” approach to making changes and saw where the Web Inspector took us regarding errors we caused, and fixed those, so that we could apply the better logic to the most recent version of our inhouse YouTube video interfacer.
text to send to Google Translate … is bolstered today via a new modus operandi …
via YouTube video ID 11 character code play a YouTube video
… feeding into our YouTube API work, but resurrecting an inhouse “blast from the past version” close to being okay, on non-mobile, changing YouTube video IDs “midstream” and keeping the video playing work all within the one webpage window. We’re sure we had ideas like this in mind using the “Splicing” word in the web application title. We’ve always been a bit obsessed with those train announcements piecing a message together, especially when it comes to “number words” via a combination of audio media snippets.
Also, today, with limited practical success, we’re trying to allow a user to loop through their media list, repeating the playing. There are that many ways we can get interrupted achieving this, it’s not funny … really … but we’re not getting upset because the user can go back to the source window and reclick buttons of their choice to keep the good times rolling … ah, MacArthur Park … again!
If you’ve been following our Spliced Media synchronized play project of recent times, through yesterday’s Spliced Audio/Video Styling Tutorial, you’ll probably guess what our “project word” would be, that being …
duration
… as a “measure” of importance to help with the sequential play of media, out of …
audio
video
image
… choices of “media category” we’re offering in this project. But what “duration” applies to the image choice above? Well, we just hardcode 5 seconds for …
(non-animated) JPEG or PNG or GIF … but …
animated GIFs have a one cycle through duration …
… that we want to help calculate for the user and show in that relevant “end of” timing textbox. Luckily, we’ve researched this in the past, but every scenario is that bit different, we find, and so here is the Javascript for what we’re using …
function prefetch(whatgifmaybe) { // thanks to https://stackoverflow.com/questions/69564118/how-to-get-xxduration-of-gif-image-in-javascript#:~:text=Mainly%20use%20parseGIF()%20%2C%20then,xxduration%20of%20a%20GIF%20image.
 if ((whatgifmaybe.toLowerCase().trim().split('#')[0].replace('/gif;', '/gif?;') + '?').indexOf('.gif?') != -1 && lastgifpreq != whatgifmaybe.split('?')[0]) {
  lastgifpreq=whatgifmaybe.split('?')[0];
  if (whatgifmaybe.indexOf('/tmp/') != -1) {
   lastgifurl='/tmp/' + whatgifmaybe.split('/tmp/')[1];
  } else {
   lastgifurl='';
  }
  document.body.style.cursor='progress';
  whatgifmaybe=whatgifmaybe.split('?')[0];
  //alert('whatgifmaybe=' + whatgifmaybe);
  // ... excerpt continues: fetch the GIF and total its frame delays ...
 }
}
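The excerpt above stops before the duration arithmetic itself, so here is a hedged, simplified sketch of the idea (a linear byte scan with a hypothetical function name, not the proper parseGIF() block walk the linked answer uses) …

```javascript
// Hedged sketch: an animated GIF's frame delays live in Graphic Control
// Extension blocks (bytes 0x21 0xF9 0x04 ...), each delay being a
// little-endian word in hundredths of a second. Summing them gives a
// rough one-cycle-through duration for the "end of" timing textbox.
function gifDurationSeconds(bytes) {
  var hundredths = 0;
  for (var i = 0; i + 5 < bytes.length; i++) {
    if (bytes[i] === 0x21 && bytes[i + 1] === 0xF9 && bytes[i + 2] === 0x04) {
      hundredths += bytes[i + 4] + 256 * bytes[i + 5]; // little-endian delay
    }
  }
  return hundredths / 100;
}
```

A real parser should walk the GIF block structure properly rather than pattern-scan, but the arithmetic is the same.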
Today, we start concertina-ing multiple image asks (and yet show image order), using our favourite “reveal” tool, the details/summary HTML element dynamic duo, so that fewer images end up disappearing “below the fold” this way in …
We have “bad hair days”, but that doesn’t stop us seeking “styling days”. Yes, we often separate CSS styling into an issue that is addressed only if we deem the project warrants it, and we’ve decided this latest Spliced Audio/Video web application project is worth it. This means that further to yesterday’s Spliced Audio/Video Browsing Data URL Tutorial we have some new CSS styling in today’s work …
… into (in the second phase of its existence within an execution run) a …
linear gradient inspired progress bar … for the secret to the “hard stops” we got great new advice from this link, thanks … to come up with dynamic CSS styling via Javascript …
function dstyleit(what) {
var oney=what.split('yellow ')[1].split('%')[0] + '%,';
what=what.replace(oney,oney + oney);
document.getElementById('dstyle').innerHTML+='<style> #subis { background: ' + what + ' } </style>';
return what;
}
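The string surgery in dstyleit can be seen in isolation via a hedged, pure-string sketch (the function name is hypothetical) of doubling up the yellow stop …

```javascript
// Hypothetical sketch: duplicating the yellow color stop's position in a
// linear-gradient string makes the color change abruptly (a "hard stop")
// instead of fading smoothly - the trick dstyleit relies on.
function hardStop(gradient) {
  var oney = gradient.split('yellow ')[1].split('%')[0] + '%,'; // e.g. "40%,"
  return gradient.replace(oney, oney + oney); // first occurrence doubled
}
```

For example, `hardStop('linear-gradient(to right, green 0%, yellow 40%, red 100%)')` doubles the `40%,` after `yellow`, which a stylesheet then treats as an abrupt transition point for the progress bar.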
browsing for local files off the client operating system environment … and …
a genericized “guise” of web server media files … and …
client File API blob or canvas content media representations
… a point of commonality. These days, whenever we can, data size permitting, we look to do away with static web server references, which are open to so many mixed content and privilege issues.
Perhaps, that is the major reason, these days, we’ve taken more and more to hashtag based means of communication via “a” “mailto:” (email) or “sms:” (SMS) links.
Today proved to us that the clientside hashtag approach, albeit the most flexible of all, has its data limitations, which PHP serverside form method=POST does better at, as far as accommodating large amounts of data.
And so, yesterday, with our Splicing Audio or Video inhouse web application, turning more towards data URLs to solve issues, we started the day …
trying hashtag clientside based navigations … but ran into data size issues, so …
introduced into our Splicing Audio or Video inhouse web application, for the first time, PHP serverside involvement …
… which got the synchronized play of our user entered audio or video media items performing better.
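That size-driven fallback decision can be sketched, in a hedged way (the 2000 character cutoff is our assumption, as real URL hash limits vary by browser, and the function name is hypothetical) …

```javascript
// Hedged sketch: pick clientside hashtag navigation for small payloads,
// and fall back to PHP serverside method=POST for big data URLs, since
// hashtag data hit size limits but POST bodies accommodate large data.
function navMode(dataUrl) {
  return dataUrl.length <= 2000 ? 'hashtag' : 'post';
}
```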
And so, further to yesterday’s Spliced Audio/Video Data URL Tutorial we added browsing for local media files as a new option for user input, by calling …
function ahere(evt) {
 // Fill the relevant "end of play" textbox once the audio source element's src is set
 if (document.getElementById('sanaudio').src.trim() != '') {
  document.getElementById(durtoid).value=Math.ceil(eval('' + document.getElementById('anaudio').duration));
 }
}
function alere(evt) {
 // Alternate audio event hook performing the same duration fill
 if (document.getElementById('sanaudio').src.trim() != '') {
  document.getElementById(durtoid).value=Math.ceil(eval('' + document.getElementById('anaudio').duration));
 }
}
function vhere(evt) {
 // Video equivalent: fill the textbox once the video source element's src is set
 if (document.getElementById('sanvideo').src.trim() != '') {
  document.getElementById(durtoid).value=Math.ceil(eval('' + document.getElementById('anvideo').duration));
 }
}
function vlere(evt) {
 // Alternate video event hook performing the same duration fill
 if (document.getElementById('sanvideo').src.trim() != '') {
  document.getElementById(durtoid).value=Math.ceil(eval('' + document.getElementById('anvideo').duration));
 }
}
This works if you can fill in the src attribute of the relevant subelement source element with a suitable data URL (which the changed do_away_with_the_boring_bits.php helper PHP derives for us). From there, in that event logic, an [element].duration is there to help fill out those end of play textboxes in a more automated fashion for the user who wants to use this new functionality, as they fill out the Spliced Media form presented.
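The four handlers above share one pattern, which a hedged generic helper (hypothetical, and not in the original code) boils down to …

```javascript
// Hypothetical helper: only fill the "end of play" textbox when the
// source element's src has actually been set, rounding the media
// duration up to whole seconds for the textbox value.
function fillDuration(srcValue, duration) {
  if (String(srcValue).trim() === '') { return null; } // no media yet
  return Math.ceil(Number(duration));
}
```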
Today we’ve written a third draft of an HTML and Javascript web application that splices up to nine bits of audio or video or image input together, building on the previous Spliced Audio/Video/Image Overlay Tutorial as shown below, here, and that can take any of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
video
image … and background image for webpage
… for either of the modes of use, that being …
discrete … or “Optional”
synchronized … or “Overlay”
… all like yesterday, but this time we allow you to “seek” or position yourself within the audio and/or video media. We still “fit” all this into GET parameter usage. Are you thinking we are a tad lazy with this approach? Well, perhaps a little, but it also means you can do this job using just clientside HTML and Javascript, without having to involve any serverside code like PHP. In this day and age, people are much keener on this “just clientside” approach, or “just client looking, plus, perhaps, Javascript serverside code” (ala Node.js), or perhaps “Javascript clientside client code, plus Ajax methodologies”. In any case, it does simplify design to not have to involve a serverside language like PHP … but please don’t think we do not encourage you to learn a serverside language like PHP.
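Fitting “seek” positions into GET parameter usage could look like the following hedged sketch (the parameter name “seek” and the function name are hypothetical, not necessarily what the live run uses) …

```javascript
// Hedged sketch: read a start-end pair of seconds out of the GET query
// string, e.g. so that only seconds 23 through 47 of a video play.
function parseSeek(query) {
  var m = /[?&]seek=(\d+)-(\d+)/.exec(query);
  return m ? { start: Number(m[1]), end: Number(m[2]) } : null;
}
```

With such a pair in hand, clientside Javascript can set the media element’s currentTime to the start value and pause at the end value, keeping the whole job serverside-free.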
While we are at it here, we continue to think about the mobile device unfriendliness with our current web application, it being, these days, that the setting of the autoplay property for a media object is frowned upon regarding these mobile devices … for reasons of “runaway” unknown charge issues as you can read at this useful link … thanks … and where they quote from Apple …
“Apple has made the decision to disable the automatic playing of video on iOS devices, through both script and attribute implementations.
In Safari, on iOS (for all devices, including iPad), where the user may be on a cellular network and be charged per data unit, preload and auto-play are disabled. No data is loaded until the user initiates it.” – Apple documentation.
A link we’d like to thank regarding the new “seek” or media positioning functionality is this one … thanks.
Also, today, for that sense of symmetry, we start to create the Audio objects from now on using …
document.createElement("AUDIO");
… as this acts the same as new Audio() to the best of our testing.
For your own testing purposes, if you know of some media URLs to try, please feel free to try the “overlay” of media ideas inherent in today’s splice_audio.htmlive run. For today’s cake “prepared before the program” we’ve again channelled the GoToMeeting Primer Tutorial which had separate audio (albeit very short … sorry … but you get the gist) and video … well, below, you can click on the image to hear the presentation with audio and video synchronized, but only seconds 23 through to 47 of the video should play, and the presentation ending with the image below …
We think, though, that we will be back regarding this interesting topic, and hope we can improve mobile device functionality.
Today we’ve written a second draft of an HTML and Javascript web application that splices up to nine bits of audio or video or image input together, building on the previous Splicing Audio Primer Tutorial as shown below, here, and that can take any of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
video
image … and background image for webpage
… for either of the modes of use, that being …
discrete … or “Optional”
synchronized … or “Overlay”
The major new change here, apart from the ability to play two media files at once in our synchronized (or “overlayed”) way, is the additional functionality for Video, and we proceeded thinking there’d be an Javascript DOM OOPy method like … var xv = new Video(); … to allow for this, but found out from this useful link … thanks … that an alternative approach for Video object creation, on the fly, is …
var xv = document.createElement("VIDEO");
… curiously. And it took us a while to tweak to the idea that to have a “display home” for the video on the webpage we needed to …
document.body.appendChild(xv);
… which means you need to take care of any HTML form data already filled in, that isn’t that form’s default, when you effectively “refresh” the webpage like this. Essentially though, media on the fly is a modern approach possible fairly easily with just clientside code. Cute, huh?!
Of course, what we still miss here, is the upload from a local place onto the web server, here at RJM Programming, capability, which we may consider in future, and that some of those other synchronization of media themed blog postings of the past, which you may want to read more, for this type of approach.
In the meantime, if you know of some media URLs to try, please feel free to try the “overlay” of media ideas inherent in today’s splice_audio.htmlive run. We’ve thought of this one. Do you remember how the GoToMeeting Primer Tutorial had separate audio (albeit very short … sorry … but you get the gist) and video … well, below, you can click on the image to hear the presentation with audio and video synchronized, and the presentation ending with the image below …
We think, though, that we will be back regarding this interesting topic.
Today we’ve written a first draft of an HTML and Javascript web application that splices up to nine bits of audio input together that can take either of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
Do you remember, perhaps, when we did a series of blog posts regarding the YouTube API, that finished, so far, with YouTube API Iframe Synchronicity Resizing Tutorial? Well, a lot of what we do today is doing similar sorts of functionalities but just for Audio objects in HTML5. For help on this we’d like to thank this great link. So rather than have HTML audio elements in our HTML, as we first shaped to do, we’ve taken the great advice from this link, and gone all Javascript DOM OOPy on the task, to splice audio media together.
There were three thought patterns going on here for me.
The first was a simulation of those Sydney train public announcements where the timbre of the voice differs a bit between when they say “Platform” and the “6” (or whatever platform it is) that follows. This is pretty obviously computer audio “bits” strung together … and wanted to get somewhere towards that capability.
The second one relates to presentation ideas following up on that “onmouseover” Siri audio enhanced presentation we did at Apple iOS Siri Audio Commentary Tutorial. Well, we think we can do something related to that here, and we’ve prepared this cake audio presentation here, for us, in advance … really, there’s no need for thanks.
The third concerns our eternal media file synchronization quests here at this blog that you may find of interest we hope, here.
Also of interest over time has been the Google Translate Text to Speech functionality that used to be very open, and we now only use around here in an interactive “user clicks” way … but we still use it, because it is very useful, so, thanks. But trying to get this method working for “Platform” and “6” without a yawning gap in between ruins the spontaneity and fun somehow, but there’s nothing stopping you making your own audio files yourself as we did in that Siri tutorial called Apple iOS Siri Audio Commentary Tutorial and take the HTML and Javascript code you could call splice_audio.html from today, and go and make your own web application? Now, is there? Huh?
Spliced Audio/Video YouTube Mobile Recall Tutorial
Of course we want to be like Spotify, with the one tap meaning peace for long periods playing music, as it is with YouTube playlists on mobile platforms. God knows, we’ve winged enough about this mobile requirement to have a real user tap precede media play. And so, in this context, yesterday’s Spliced Audio/Video YouTube Recall Tutorial‘s non-mobile “playlist” style YouTube video playing was a “walk in the park”.
… we finally arrived at a happy home for our interventional Javascript … as well as a first play intervention. As important as “intervention” is, this intervention needed some “preparatory intervention” that little bit earlier on in time …
Methinks it’s doubtful this avoidance of “all but the first” mobile tap “ask” can be avoided not using the YouTube API. For mobile, with these inhouse …
… we cut out “parent” involvement in any “decision making” sense, except as the place where checkboxes determine what is played, otherwise we know, you risk needing to re-rely on user instigated taps after the first when playing mobile “playlist” video lists sequentially.
… organize themselves with a hashtag based calling URL logic when multiple YouTube videos are being asked to play sequentially. Here is new Javascript featuring in the inhouse YouTube API Video Player via lhchk(”); new call at the document.body onload event …
… we find useful, around here, to describe web page design issues and solutions. They both come into play a lot, at least for us. Mind you, our thoughts may have pared down complication thinking to arrive at these two concept foci.
Today, we’re rewriting the existant Overlay logic from 2016 into our inhouse Spliced Audio/Video/Image web application, to cater for those data URL additional input data functionalities we’ve added with our 2025 revisit.
… template HTML Javascript variables a style=”display:none;” … and curiously, we were going to use style=”” but this actually has more meaning than you’d think, as it seems to create a set of default styling decisions?! … and then at document.body onload event we have …
Another day, another deliberation about delimitation! Yes, as a programmer, of the mere mortal variety, and you ask a bit of your engaged users, you’ll not have much chance of …
changing hardware
changing firmware
changing environment … very much
… achieving your ends … ngah ha ha!
But you have got delimitation on your side.
Down @ …
Wait for ! to finish first …
We do this type of work a lot, and recommend …
get firm idea in the mind of what you want to achieve (yesterday and today, it being the “midstream” ability to change YouTube video ID references using our inhouse YouTube video interfacer) … including …
what might happen into the future … and allow for flexibility should a new piece of functionality happen into the future … and …
the work is often behind the scenes, and the user not interested in odd arrangements, so flag with title hovering (non-mobile) and/or placeholder (non-mobile or mobile) textbox flagging of your preferred usage … though …
behind the scenes, where possible, allow for more flexibility, regarding which delimiter makes the functionality happen, should the user forget, and flounder
generally speaking textbox static HTML like …
<input placeholder="Can | separate time to next YouTube video ID (use ; for just audio)" style="width:400px;" onblur="checkval(this);" type="text" onmouseover="toms(this);" id="i1" name="i1" value="" title="0:00:00">
…
but reworked for the first such textbox (and no, using CSS for this is not recommended) using Javascript …
setTimeout(function(){ if (document.getElementById('i0')) { document.getElementById('i0').placeholder='Can ; separate time to flag Just Audio'; } }, 4000);
… to help inform the user of what is possible … ngah ha ha
No | … wait … ~!@#$%%%#@#$^& … ouch
Proof of the delimitation pudding here, we think, is that we could move off yesterday’s …
blast from the past version
… working methodically through the issues, making the current version better too. You will find web browser Web Inspectors invaluable here. We used a “suck it and see” approach to making changes and saw where the Web Inspector took us regarding errors we caused, and fixed those, so that we could apply the better logic to the most recent version of our inhouse YouTube video interfacer.
text to send to Google Translate … is bolstered today via a new modus operandum …
via YouTube video ID 11 character code play a YouTube video
… feeding into our YouTube API work, but resurrecting an inhouse “blast from the past version” close to being okay, on non-mobile, changing YouTube video IDs “midstream” and keeping the video playing work all within the one webpage window. Am sure we had ideas like this in mind using the “Splicing” word in the web application title. We’ve always been a bit obsessed with those train announcements piecing a message together, especially when it comes to “number words” via a combination of audio media snippets.
Also, today, with limited practical success, we’re trying to allow a user to loop through their media list, repeating the playing. There are that many ways we can get interrupted achieving this, it’s not funny … really … but we’re not getting upset because the user can go back to the source window and reclick buttons of their choice to keep the good times rolling … ah, MacArthur Park … again!
If you’ve been following yesterday’s Spliced Audio/Video Styling Tutorial Spliced Media synchronized play project of recent times, you’ll probably guess what our “project word” would be, that being …
duration
… as a “measure” of importance to help with the sequential play of media, out of …
audio
video
image
… choices of “media category” we’re offering in this project. But, what “duration” applies to image choice above? Well, we just hardcode 5 seconds for …
(non-animated) JPEG or PNG or GIF … but …
animated GIFs have a one cycle through duration …
… that we want to help calculate for the user and show in that relevant “end of” timing textbox. Luckily, we’ve researched this in the past, but every scenario is that bit different, we find, and so here is the Javascript for what we’re using …
function prefetch(whatgifmaybe) { // thanks to https://stackoverflow.com/questions/69564118/how-to-get-xxduration-of-gif-image-in-javascript#:~:text=Mainly%20use%20parseGIF()%20%2C%20then,xxduration%20of%20a%20GIF%20image.
if ((whatgifmaybe.toLowerCase().trim().split('#')[0].replace('/gif;', '/gif?;') + '?').indexOf('.gif?') != -1 && lastgifpreq != whatgifmaybe.split('?')[0]) {
lastgifpreq=whatgifmaybe.split('?')[0];
if (whatgifmaybe.indexOf('/tmp/') != -1) {
lastgifurl='/tmp/' + whatgifmaybe.split('/tmp/')[1];
} else {
lastgifurl='';
}
document.body.style.cursor='progress';
whatgifmaybe=whatgifmaybe.split('?')[0];
//alert('whatgifmaybe=' + whatgifmaybe);
Today, we start concertinaing multiple image asks (and yet show image order), using our favourite “reveal” tool, the details/summary HTML element dynamic duo, so that there are less images disappearing “below the fold” happening this way in …
We have “bad hair days”, but that doesn’t stop us seeking “styling days”. Yes, we often separate CSS styling into an issue that is addressed only if we deem the project warrants it, and we’ve decided this latest Spliced Audio/Video web application project is worth it. This means that further to yesterday’s Spliced Audio/Video Browsing Data URL Tutorial we have some new CSS styling in today’s work …
… into (in the second phase of it’s existence within an execution run) a …
linear gradient inspired progress bar … the secret to the “hard stops” we got great new advice from this link, thanks … to come up with dynamic CSS styling via Javascript …
function dstyleit(what) {
var oney=what.split('yellow ')[1].split('%')[0] + '%,';
what=what.replace(oney,oney + oney);
document.getElementById('dstyle').innerHTML+='<style> #subis { background: ' + what + ' } </style>';
return what;
}
browsing for local files off the client operating system environment … and …
a genericized “guise” of web server media files … and …
client File API blob or canvas content media representations
… a point of commonality, and as far as we are concerned, we are always looking to these days, whenever we can, data size permitting, to do away with static web server references, open to so many mixed content and privilege issues.
Perhaps, that is the major reason, these days, we’ve taken more and more to hashtag based means of communication via “a” “mailto:” (email) or “sms:” (SMS) links.
Today proved to us that that, albeit most flexible of all, clientside hashtag approach has its data limitations, that PHP serverside form method=POST does better at, as far as accomodating large amounts of data.
And so, yesterday, with our Splicing Audio or Video inhouse web application, turning more towards data URLs to solve issues, we started the day …
trying hashtag clientside based navigations … but ran into data size issues, so …
introduced into our Splicing Audio or Video inhouse web application, for the first time, PHP serverside involvement …
… which got the synchronized play of our user entered audio or video media items performing better.
And so, further to yesterday’s Spliced Audio/Video Data URL Tutorial we added browsing for local media files as a new option for user input, by calling …
function ahere(evt) {
// once the audio source subelement has a src, fill the relevant
// "end of play" textbox from the audio element's duration,
// rounded up to whole seconds
if (document.getElementById('sanaudio').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(Number('' + document.getElementById('anaudio').duration));
}
}
function alere(evt) {
// second audio event hook, same duration-to-textbox logic
if (document.getElementById('sanaudio').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(Number('' + document.getElementById('anaudio').duration));
}
}
function vhere(evt) {
// video counterpart of ahere above
if (document.getElementById('sanvideo').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(Number('' + document.getElementById('anvideo').duration));
}
}
function vlere(evt) {
// video counterpart of alere above
if (document.getElementById('sanvideo').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(Number('' + document.getElementById('anvideo').duration));
}
}
This works if you can fill in the src attribute of the relevant subelement source element with a suitable data URL (we used the changed do_away_with_the_boring_bits.php helper PHP to derive it). From there, in that event logic, an [element].duration is there to help fill out those end of play textboxes in a more automated fashion for the user who wants to use this new functionality, as they fill out the Spliced Media form presented.
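The rounding step those handlers share can be shown on its own (endOfPlaySeconds below is our own hypothetical helper, not from the code above): a media element’s fractional duration is rounded up to whole seconds before it lands in the “end of play” textbox.

```javascript
// Hypothetical helper mirroring the handlers above: round a media
// element's fractional duration (in seconds) up to a whole second,
// the value the "end of play" textbox receives.
function endOfPlaySeconds(duration) {
  return Math.ceil(Number('' + duration));
}

console.log(endOfPlaySeconds(12.3)); // 13
console.log(endOfPlaySeconds(47));   // 47
```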
Today we’ve written a third draft of an HTML and Javascript web application that splices up to nine bits of audio or video or image input together, building on the previous Spliced Audio/Video/Image Overlay Tutorial as shown below, here, and that can take any of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
video
image … and background image for webpage
… for either of the modes of use, that being …
discrete … or “Optional”
synchronized … or “Overlay”
… all like yesterday, but this time we allow you to “seek” or position yourself within the audio and/or video media. We still “fit” all this into GET parameter usage. Are you thinking we are a tad lazy with this approach? Well, perhaps a little, but it also means you can do this job just using clientside HTML and Javascript, without having to involve any serverside code like PHP, and in this day and age, people are much keener on this “just clientside” or “just client looking, plus, perhaps, Javascript serverside code” (ala Node.js) or perhaps “Javascript clientside code, plus Ajax methodologies”. In any case, it does simplify design not to have to involve a serverside language like PHP … but please don’t think we do not encourage you to learn a serverside language like PHP.
While we are at it here, we continue to think about the mobile device unfriendliness of our current web application, it being, these days, that setting the autoplay property for a media object is frowned upon on mobile devices … for reasons of “runaway” unknown charge issues, as you can read at this useful link … thanks … and where they quote from Apple …
“Apple has made the decision to disable the automatic playing of video on iOS devices, through both script and attribute implementations.
In Safari, on iOS (for all devices, including iPad), where the user may be on a cellular network and be charged per data unit, preload and auto-play are disabled. No data is loaded until the user initiates it.” – Apple documentation.
A link we’d like to thank regarding the new “seek” or media positioning functionality is this one … thanks.
Also, today, for that sense of symmetry, we start to create the Audio objects from now on using …
document.createElement("AUDIO");
… as this acts the same as new Audio() to the best of our testing.
For your own testing purposes, if you know of some media URLs to try, please feel free to try the “overlay” of media ideas inherent in today’s splice_audio.html live run. For today’s cake “prepared before the program” we’ve again channelled the GoToMeeting Primer Tutorial which had separate audio (albeit very short … sorry … but you get the gist) and video … well, below, you can click on the image to hear the presentation with audio and video synchronized, but only seconds 23 through to 47 of the video should play, and the presentation ending with the image below …
We think, though, that we will be back regarding this interesting topic, and hope we can improve mobile device functionality.
Today we’ve written a second draft of an HTML and Javascript web application that splices up to nine bits of audio or video or image input together, building on the previous Splicing Audio Primer Tutorial as shown below, here, and that can take any of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
video
image … and background image for webpage
… for either of the modes of use, that being …
discrete … or “Optional”
synchronized … or “Overlay”
The major new change here, apart from the ability to play two media files at once in our synchronized (or “overlayed”) way, is the additional functionality for Video, and we proceeded thinking there’d be a Javascript DOM OOPy method like … var xv = new Video(); … to allow for this, but found out from this useful link … thanks … that an alternative approach for Video object creation, on the fly, is …
var xv = document.createElement("VIDEO");
… curiously. And it took us a while to tweak to the idea that to have a “display home” for the video on the webpage we needed to …
document.body.appendChild(xv);
… which means you need to take care of any HTML form data already filled in, that isn’t that form’s default, when you effectively “refresh” the webpage like this. Essentially though, media on the fly is a modern approach possible fairly easily with just clientside code. Cute, huh?!
Of course, what we still miss here is the capability to upload from a local place onto the web server here at RJM Programming, which we may consider in future, and some of those other synchronization of media themed blog postings of the past, which you may want to read, cover more of this type of approach.
In the meantime, if you know of some media URLs to try, please feel free to try the “overlay” of media ideas inherent in today’s splice_audio.html live run. We’ve thought of this one. Do you remember how the GoToMeeting Primer Tutorial had separate audio (albeit very short … sorry … but you get the gist) and video … well, below, you can click on the image to hear the presentation with audio and video synchronized, and the presentation ending with the image below …
We think, though, that we will be back regarding this interesting topic.
Today we’ve written a first draft of an HTML and Javascript web application that splices up to nine bits of audio input together that can take either of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
Do you remember, perhaps, when we did a series of blog posts regarding the YouTube API, that finished, so far, with YouTube API Iframe Synchronicity Resizing Tutorial? Well, a lot of what we do today is doing similar sorts of functionalities but just for Audio objects in HTML5. For help on this we’d like to thank this great link. So rather than have HTML audio elements in our HTML, as we first shaped to do, we’ve taken the great advice from this link, and gone all Javascript DOM OOPy on the task, to splice audio media together.
There were three thought patterns going on here for me.
The first was a simulation of those Sydney train public announcements where the timbre of the voice differs a bit between when they say “Platform” and the “6” (or whatever platform it is) that follows. This is pretty obviously computer audio “bits” strung together … and we wanted to get somewhere towards that capability.
The second one relates to presentation ideas following up on that “onmouseover” Siri audio enhanced presentation we did at Apple iOS Siri Audio Commentary Tutorial. Well, we think we can do something related to that here, and we’ve prepared this cake audio presentation here, for us, in advance … really, there’s no need for thanks.
The third concerns our eternal media file synchronization quests here at this blog that you may find of interest we hope, here.
Also of interest over time has been the Google Translate Text to Speech functionality that used to be very open, and that we now only use around here in an interactive “user clicks” way … but we still use it, because it is very useful, so, thanks. Trying to get this method working for “Platform” and “6” without a yawning gap in between ruins the spontaneity and fun somehow, but there’s nothing stopping you making your own audio files yourself, as we did in that Siri tutorial called Apple iOS Siri Audio Commentary Tutorial, taking the HTML and Javascript code you could call splice_audio.html from today, and going and making your own web application? Now, is there? Huh?
… organize themselves with a hashtag based calling URL logic when multiple YouTube videos are being asked to play sequentially. Here is new Javascript featuring in the inhouse YouTube API Video Player via a new lhchk('') call at the document.body onload event …
… we find useful, around here, to describe web page design issues and solutions. They both come into play a lot, at least for us. Mind you, our thoughts may have pared down complication thinking to arrive at these two concept foci.
Today, we’re rewriting the existing Overlay logic from 2016 into our inhouse Spliced Audio/Video/Image web application, to cater for those data URL additional input data functionalities we’ve added with our 2025 revisit.
… template HTML Javascript variables a style="display:none;" … and curiously, we were going to use style="" but this actually has more meaning than you’d think, as it seems to create a set of default styling decisions?! … and then at the document.body onload event we have …
Another day, another deliberation about delimitation! Yes, as a programmer of the mere mortal variety, if you ask a bit of your engaged users, you’ll not have much chance of …
changing hardware
changing firmware
changing environment … very much
… achieving your ends … ngah ha ha!
But you have got delimitation on your side.
Down @ …
Wait for ! to finish first …
We do this type of work a lot, and recommend …
get a firm idea in mind of what you want to achieve (yesterday and today, it being the “midstream” ability to change YouTube video ID references using our inhouse YouTube video interfacer) … including …
what might happen into the future … and allow for flexibility should a new piece of functionality happen into the future … and …
the work is often behind the scenes, and the user is not interested in odd arrangements, so flag your preferred usage with title hovering (non-mobile) and/or placeholder (non-mobile or mobile) textbox hints … though …
behind the scenes, where possible, allow for more flexibility, regarding which delimiter makes the functionality happen, should the user forget, and flounder
generally speaking, textbox static HTML like …
<input placeholder="Can | separate time to next YouTube video ID (use ; for just audio)" style="width:400px;" onblur="checkval(this);" type="text" onmouseover="toms(this);" id="i1" name="i1" value="" title="0:00:00">
…
but reworked for the first such textbox (and no, using CSS for this is not recommended) using Javascript …
setTimeout(function(){ if (document.getElementById('i0')) { document.getElementById('i0').placeholder='Can ; separate time to flag Just Audio'; } }, 4000);
… to help inform the user of what is possible … ngah ha ha
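That “allow for more flexibility, regarding which delimiter” advice might be sketched like this (splitFlexibly is our own hypothetical name, not from the inhouse interfacer): prefer one documented delimiter, but fall back to an alternative should the user reach for that instead.

```javascript
// Hedged sketch of flexible delimiting (splitFlexibly is our hypothetical
// name): split on "|" when present, otherwise fall back to ";", so a
// floundering user's entry still parses into its time and ID parts.
function splitFlexibly(value) {
  var delim = value.indexOf('|') !== -1 ? '|' : ';';
  return value.split(delim);
}

console.log(splitFlexibly('0:01:23|dQw4w9WgXcQ')); // [ '0:01:23', 'dQw4w9WgXcQ' ]
console.log(splitFlexibly('0:01:23;dQw4w9WgXcQ')); // [ '0:01:23', 'dQw4w9WgXcQ' ]
```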
No | … wait … ~!@#$%%%#@#$^& … ouch
Proof of the delimitation pudding here, we think, is that we could move off yesterday’s …
blast from the past version
… working methodically through the issues, making the current version better too. You will find web browser Web Inspectors invaluable here. We used a “suck it and see” approach to making changes, saw where the Web Inspector took us regarding errors we caused, and fixed those, so that we could apply the better logic to the most recent version of our inhouse YouTube video interfacer.
text to send to Google Translate … is bolstered today via a new modus operandi …
via YouTube video ID 11 character code play a YouTube video
… feeding into our YouTube API work, but resurrecting an inhouse “blast from the past version” close to being okay, on non-mobile, at changing YouTube video IDs “midstream” and keeping the video playing, all within the one webpage window. We are sure we had ideas like this in mind using the “Splicing” word in the web application title. We’ve always been a bit obsessed with those train announcements piecing a message together, especially when it comes to “number words” via a combination of audio media snippets.
Also, today, with limited practical success, we’re trying to allow a user to loop through their media list, repeating the playing. There are that many ways we can get interrupted achieving this, it’s not funny … really … but we’re not getting upset because the user can go back to the source window and reclick buttons of their choice to keep the good times rolling … ah, MacArthur Park … again!
If you’ve been following the Spliced Media synchronized play project of recent times, through yesterday’s Spliced Audio/Video Styling Tutorial, you’ll probably guess what our “project word” would be, that being …
duration
… as a “measure” of importance to help with the sequential play of media, out of …
audio
video
image
… choices of “media category” we’re offering in this project. But, what “duration” applies to image choice above? Well, we just hardcode 5 seconds for …
(non-animated) JPEG or PNG or GIF … but …
animated GIFs have a one cycle through duration …
… that we want to help calculate for the user and show in that relevant “end of” timing textbox. Luckily, we’ve researched this in the past, but every scenario is that bit different, we find, and so here is the Javascript for what we’re using …
function prefetch(whatgifmaybe) { // thanks to https://stackoverflow.com/questions/69564118/how-to-get-xxduration-of-gif-image-in-javascript#:~:text=Mainly%20use%20parseGIF()%20%2C%20then,xxduration%20of%20a%20GIF%20image.
// only proceed for a .gif URL (allowing for querystring or data URL forms) we have not already processed
if ((whatgifmaybe.toLowerCase().trim().split('#')[0].replace('/gif;', '/gif?;') + '?').indexOf('.gif?') != -1 && lastgifpreq != whatgifmaybe.split('?')[0]) {
lastgifpreq=whatgifmaybe.split('?')[0];
// remember any web server /tmp/ URL form for later use
if (whatgifmaybe.indexOf('/tmp/') != -1) {
lastgifurl='/tmp/' + whatgifmaybe.split('/tmp/')[1];
} else {
lastgifurl='';
}
document.body.style.cursor='progress'; // show the user a calculation is under way
whatgifmaybe=whatgifmaybe.split('?')[0];
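Though the snippet above is excerpted midstream, the underlying idea from that Stack Overflow parseGIF advice can be sketched in isolation: an animated GIF’s one cycle duration is the sum of its per-frame delays, which the GIF format stores in hundredths of a second. The helper name below is our own, not from the inhouse code.

```javascript
// Hedged sketch (gifCycleSeconds is our hypothetical name): given the
// per-frame delay values parsed out of a GIF (hundredths of a second),
// the one-cycle-through duration is simply their sum, in seconds.
function gifCycleSeconds(frameDelaysHundredths) {
  return frameDelaysHundredths.reduce(function (sum, d) {
    return sum + d;
  }, 0) / 100;
}

console.log(gifCycleSeconds([10, 10, 50])); // 0.7
```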
Perhaps, that is the major reason, these days, we’ve taken more and more to hashtag based means of communication via “a” “mailto:” (email) or “sms:” (SMS) links.
Today proved to us that that, albeit most flexible of all, clientside hashtag approach has its data limitations, that PHP serverside form method=POST does better at, as far as accomodating large amounts of data.
And so, yesterday, with our Splicing Audio or Video inhouse web application, turning more towards data URLs to solve issues, we started the day …
trying hashtag clientside based navigations … but ran into data size issues, so …
introduced into our Splicing Audio or Video inhouse web application, for the first time, PHP serverside involvement …
… that made things start working for us better getting the synchronized play of our user entered audio or video media items performing better.
And so, further to yesterday’s Spliced Audio/Video Data URL Tutorial we added browsing for local media files as a new option for user input, by calling …
function ahere(evt) {
if (document.getElementById('sanaudio').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(eval('' + document.getElementById('anaudio').duration));
//alert('audio here');
//alert('audio here ' + document.getElementById('anaudio').duration);
}
}
function alere(evt) {
if (document.getElementById('sanaudio').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(eval('' + document.getElementById('anaudio').duration));
//alert('audio Here');
}
}
function vhere(evt) {
if (document.getElementById('sanvideo').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(eval('' + document.getElementById('anvideo').duration));
//alert('video here');
//alert('video here ' + document.getElementById('anvideo').duration);
}
}
function vlere(evt) {
if (document.getElementById('sanvideo').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(eval('' + document.getElementById('anvideo').duration));
//alert('video Here');
}
}
This works if you can fill in the src attribute of the relevant subelement source element with a suitable data URL (we used the changeddo_away_with_the_boring_bits.php helping PHP to derive). From there, in that event logic an [element].duration is there to help fill out those end of play textboxes in a more automated fashion for the user that wants to use this new functionality, as they fill out the Spliced Media form presented.
Today we’ve written a third draft of an HTML and Javascript web application that splices up to nine bits of audio or video or image input together, building on the previous Spliced Audio/Video/Image Overlay Tutorial as shown below, here, and that can take any of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
video
image … and background image for webpage
… for either of the modes of use, that being …
discrete … or “Optional”
synchronized … or “Overlay”
… all like yesterday, but this time we allow you to “seek” or position yourself within the audio and/or video media. We still all “fit” this into GET parameter usage. Are you thinking we are a tad lazy with this approach? Well, perhaps a little, but it also means you can do this job just using clientside HTML and Javascript, without having to involve any serverside code like PHP, and in this day and age, people are much keener on this “just clientside” or “just client looking, plus, perhaps, Javascript serverside code” (ala Node.js) or perhaps “Javascript clientside client code, plus Ajax methodologies”. In any case, it does simplify design to not have to involve a serverside language like PHP … but please don’t think we do not encourage you to learn a serverside language like PHP.
While we are at it here, we continue to think about the mobile device unfriendliness with our current web application, it being, these days, that the setting of the autoplay property for a media object is frowned upon regarding these mobile devices … for reasons of “runaway” unknown charge issues as you can read at this useful link … thanks … and where they quote from Apple …
“Apple has made the decision to disable the automatic playing of video on iOS devices, through both script and attribute implementations.
In Safari, on iOS (for all devices, including iPad), where the user may be on a cellular network and be charged per data unit, preload and auto-play are disabled. No data is loaded until the user initiates it.” – Apple documentation.
A link we’d like to thank regarding the new “seek” or media positioning functionality is this one … thanks.
Also, today, for that sense of symmetry, we start to create the Audio objects from now on using …
document.createElement("AUDIO");
… as this acts the same as new Audio() to the best of our testing.
For your own testing purposes, if you know of some media URLs to try, please feel free to try the “overlay” of media ideas inherent in today’s splice_audio.htmlive run. For today’s cake “prepared before the program” we’ve again channelled the GoToMeeting Primer Tutorial which had separate audio (albeit very short … sorry … but you get the gist) and video … well, below, you can click on the image to hear the presentation with audio and video synchronized, but only seconds 23 through to 47 of the video should play, and the presentation ending with the image below …
We think, though, that we will be back regarding this interesting topic, and hope we can improve mobile device functionality.
Today we’ve written a second draft of an HTML and Javascript web application that splices up to nine bits of audio or video or image input together, building on the previous Splicing Audio Primer Tutorial as shown below, here, and that can take any of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
video
image … and background image for webpage
… for either of the modes of use, that being …
discrete … or “Optional”
synchronized … or “Overlay”
The major new change here, apart from the ability to play two media files at once in our synchronized (or “overlayed”) way, is the additional functionality for Video, and we proceeded thinking there’d be an Javascript DOM OOPy method like … var xv = new Video(); … to allow for this, but found out from this useful link … thanks … that an alternative approach for Video object creation, on the fly, is …
var xv = document.createElement("VIDEO");
… curiously. And it took us a while to tweak to the idea that to have a “display home” for the video on the webpage we needed to …
document.body.appendChild(xv);
… which means you need to take care of any HTML form data already filled in, that isn’t that form’s default, when you effectively “refresh” the webpage like this. Essentially though, media on the fly is a modern approach possible fairly easily with just clientside code. Cute, huh?!
Of course, what we still miss here, is the upload from a local place onto the web server, here at RJM Programming, capability, which we may consider in future, and that some of those other synchronization of media themed blog postings of the past, which you may want to read more, for this type of approach.
In the meantime, if you know of some media URLs to try, please feel free to try the “overlay” of media ideas inherent in today’s splice_audio.htmlive run. We’ve thought of this one. Do you remember how the GoToMeeting Primer Tutorial had separate audio (albeit very short … sorry … but you get the gist) and video … well, below, you can click on the image to hear the presentation with audio and video synchronized, and the presentation ending with the image below …
We think, though, that we will be back regarding this interesting topic.
Today we’ve written a first draft of an HTML and Javascript web application that splices up to nine bits of audio input together that can take either of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
Do you remember, perhaps, when we did a series of blog posts regarding the YouTube API, that finished, so far, with YouTube API Iframe Synchronicity Resizing Tutorial? Well, a lot of what we do today is doing similar sorts of functionalities but just for Audio objects in HTML5. For help on this we’d like to thank this great link. So rather than have HTML audio elements in our HTML, as we first shaped to do, we’ve taken the great advice from this link, and gone all Javascript DOM OOPy on the task, to splice audio media together.
There were three thought patterns going on here for me.
The first was a simulation of those Sydney train public announcements where the timbre of the voice differs a bit between when they say “Platform” and the “6” (or whatever platform it is) that follows. This is pretty obviously computer audio “bits” strung together … and wanted to get somewhere towards that capability.
The second one relates to presentation ideas following up on that “onmouseover” Siri audio enhanced presentation we did at Apple iOS Siri Audio Commentary Tutorial. Well, we think we can do something related to that here, and we’ve prepared this cake audio presentation here, for us, in advance … really, there’s no need for thanks.
The third concerns our eternal media file synchronization quests here at this blog that you may find of interest we hope, here.
Also of interest over time has been the Google Translate Text to Speech functionality that used to be very open, and we now only use around here in an interactive “user clicks” way … but we still use it, because it is very useful, so, thanks. But trying to get this method working for “Platform” and “6” without a yawning gap in between ruins the spontaneity and fun somehow, but there’s nothing stopping you making your own audio files yourself as we did in that Siri tutorial called Apple iOS Siri Audio Commentary Tutorial and take the HTML and Javascript code you could call splice_audio.html from today, and go and make your own web application? Now, is there? Huh?
Another day, another deliberation about delimitation! Yes, as a programmer of the mere mortal variety, if you ask a bit of your engaged users, you’ll not have much chance of …
changing hardware
changing firmware
changing environment … very much
… achieving your ends … ngah ha ha!
But you have got delimitation on your side.
Down @ …
Wait for ! to finish first …
We do this type of work a lot, and recommend …
get a firm idea in the mind of what you want to achieve (yesterday and today, it being the “midstream” ability to change YouTube video ID references using our inhouse YouTube video interfacer) … including …
what might happen into the future … allowing for flexibility should a new piece of functionality arrive … and …
the work is often behind the scenes, and the user is not interested in odd arrangements, so flag your preferred usage via title hovering (non-mobile) and/or placeholder (non-mobile or mobile) textbox wording … though …
behind the scenes, where possible, allow for more flexibility, regarding which delimiter makes the functionality happen, should the user forget, and flounder
generally speaking, textbox static HTML like …
<input placeholder="Can | separate time to next YouTube video ID (use ; for just audio)" style="width:400px;" onblur="checkval(this);" type="text" onmouseover="toms(this);" id="i1" name="i1" value="" title="0:00:00">
…
but reworked for the first such textbox (and no, using CSS for this is not recommended) using Javascript …
setTimeout(function(){ if (document.getElementById('i0')) { document.getElementById('i0').placeholder='Can ; separate time to flag Just Audio'; } }, 4000);
… to help inform the user of what is possible … ngah ha ha
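The “be flexible about which delimiter” advice above can be sketched as a small helper (the function and property names here are ours, for illustration, not the web application’s actual code): accept either | or ; between the time and the YouTube video ID, and flag “just audio” when ; was the one used …

```javascript
// Hypothetical helper: tolerate either "|" or ";" between the time
// prefix and the YouTube video ID, per the flexibility advice above.
// Returns { time, videoId, justAudio } or null if no delimiter found.
function parseTimedEntry(value) {
  var idx = value.search(/[|;]/);         // first | or ; wins
  if (idx === -1) { return null; }        // user forgot the delimiter
  var delim = value.charAt(idx);
  return {
    time: value.slice(0, idx).trim(),     // e.g. "0:01:30"
    videoId: value.slice(idx + 1).trim(), // 11 character YouTube code
    justAudio: (delim === ';')            // ";" flags audio-only play
  };
}
```

… so the functionality still happens should the user pick the “other” delimiter.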
No | … wait … ~!@#$%%%#@#$^& … ouch
Proof of the delimitation pudding here, we think, is that we could move off yesterday’s …
blast from the past version
… working methodically through the issues, making the current version better too. You will find web browser Web Inspectors invaluable here. We used a “suck it and see” approach to making changes and saw where the Web Inspector took us regarding errors we caused, and fixed those, so that we could apply the better logic to the most recent version of our inhouse YouTube video interfacer.
text to send to Google Translate … is bolstered today via a new modus operandi …
play a YouTube video via its 11 character YouTube video ID code
… feeding into our YouTube API work, but resurrecting an inhouse “blast from the past version” close to being okay, on non-mobile, at changing YouTube video IDs “midstream” and keeping the video playing, all within the one webpage window. We’re sure we had ideas like this in mind using the “Splicing” word in the web application title. We’ve always been a bit obsessed with those train announcements piecing a message together, especially when it comes to “number words” via a combination of audio media snippets.
Also, today, with limited practical success, we’re trying to allow a user to loop through their media list, repeating the playing. There are that many ways we can get interrupted achieving this, it’s not funny … really … but we’re not getting upset because the user can go back to the source window and reclick buttons of their choice to keep the good times rolling … ah, MacArthur Park … again!
If you’ve been following the Spliced Media synchronized play project of recent times, most recently yesterday’s Spliced Audio/Video Styling Tutorial, you’ll probably guess what our “project word” would be, that being …
duration
… as a “measure” of importance to help with the sequential play of media, out of …
audio
video
image
… choices of “media category” we’re offering in this project. But what “duration” applies to the image choice above? Well, we just hardcode 5 seconds for …
(non-animated) JPEG or PNG or GIF … but …
animated GIFs have a one cycle through duration …
… that we want to help calculate for the user and show in that relevant “end of” timing textbox. Luckily, we’ve researched this in the past, but every scenario is that bit different, we find, and so here is the Javascript for what we’re using …
function prefetch(whatgifmaybe) { // thanks to https://stackoverflow.com/questions/69564118/how-to-get-xxduration-of-gif-image-in-javascript#:~:text=Mainly%20use%20parseGIF()%20%2C%20then,xxduration%20of%20a%20GIF%20image.
if ((whatgifmaybe.toLowerCase().trim().split('#')[0].replace('/gif;', '/gif?;') + '?').indexOf('.gif?') != -1 && lastgifpreq != whatgifmaybe.split('?')[0]) { // only GIF candidates we haven't already fetched (lastgifpreq and lastgifurl are globals)
lastgifpreq=whatgifmaybe.split('?')[0];
if (whatgifmaybe.indexOf('/tmp/') != -1) {
lastgifurl='/tmp/' + whatgifmaybe.split('/tmp/')[1];
} else {
lastgifurl='';
}
document.body.style.cursor='progress'; // flag to the user that a fetch is underway
whatgifmaybe=whatgifmaybe.split('?')[0];
//alert('whatgifmaybe=' + whatgifmaybe);
// ... the parseGIF() based duration calculation continues from here ...
}
}
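The prefetch() excerpt above goes on to a parseGIF() based duration calculation. As a minimal, self-contained sketch of the core idea (our own naive version, not the web application’s code): scan the GIF bytes for Graphic Control Extension blocks (0x21 0xF9 0x04) and sum their frame delays, which GIF stores as little-endian hundredths of a second. A real parser should walk the block structure properly, since this naive scan can false-positive on matching bytes inside image data …

```javascript
// Naive sketch: sum an animated GIF's frame delays from raw bytes.
// A Graphic Control Extension starts 0x21 0xF9 0x04; its delay is a
// little-endian 16-bit value at offsets 4..5, in 1/100ths of a second.
function gifDurationSeconds(bytes) {
  var hundredths = 0;
  for (var i = 0; i + 5 < bytes.length; i++) {
    if (bytes[i] === 0x21 && bytes[i + 1] === 0xF9 && bytes[i + 2] === 0x04) {
      hundredths += bytes[i + 4] | (bytes[i + 5] << 8);
      i += 5; // skip past this extension block
    }
  }
  return hundredths / 100; // one cycle through duration, in seconds
}
```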
Today, we start concertinaing multiple image asks (while still showing image order), using our favourite “reveal” tool, the details/summary HTML element dynamic duo, so that fewer images disappear “below the fold” this way in …
We have “bad hair days”, but that doesn’t stop us seeking “styling days”. Yes, we often separate CSS styling into an issue that is addressed only if we deem the project warrants it, and we’ve decided this latest Spliced Audio/Video web application project is worth it. This means that further to yesterday’s Spliced Audio/Video Browsing Data URL Tutorial we have some new CSS styling in today’s work …
… into (in the second phase of its existence within an execution run) a …
linear gradient inspired progress bar … for the secret to those “hard stops” we got great new advice from this link, thanks … and came up with dynamic CSS styling via Javascript …
function dstyleit(what) {
var oney=what.split('yellow ')[1].split('%')[0] + '%,'; // the yellow stop's percentage, e.g. "40%,"
what=what.replace(oney,oney + oney); // duplicate it so the gradient "hard stops" rather than blends
document.getElementById('dstyle').innerHTML+='<style> #subis { background: ' + what + ' } </style>';
return what;
}
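To see what dstyleit() does to the gradient string: it finds the yellow stop’s percentage and doubles it up, and that duplicated lone percentage acts as a CSS interpolation hint, which is what gives the “hard stop” look. A DOM-free sketch of just the string manipulation (our own illustration) …

```javascript
// DOM-free illustration of the dstyleit() string trick: duplicate the
// yellow stop's percentage so the gradient snaps rather than blends.
function hardStop(what) {
  var oney = what.split('yellow ')[1].split('%')[0] + '%,';
  return what.replace(oney, oney + oney);
}
```

… so 'yellow 50%,' becomes 'yellow 50%,50%,' inside the gradient.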
browsing for local files off the client operating system environment … and …
a genericized “guise” of web server media files … and …
client File API blob or canvas content media representations
… share a point of commonality. And as far as we are concerned, these days, whenever we can, data size permitting, we are always looking to do away with static web server references, which are open to so many mixed content and privilege issues.
Perhaps that is the major reason, these days, we’ve taken more and more to hashtag based means of communication via “a” (anchor) “mailto:” (email) or “sms:” (SMS) links.
Today proved to us that the clientside hashtag approach, albeit the most flexible of all, has its data size limitations, and that PHP serverside form method=POST does better as far as accommodating large amounts of data goes.
And so, yesterday, with our Splicing Audio or Video inhouse web application, turning more towards data URLs to solve issues, we started the day …
trying hashtag clientside based navigations … but ran into data size issues, so …
introduced into our Splicing Audio or Video inhouse web application, for the first time, PHP serverside involvement …
… and that got the synchronized play of our user entered audio or video media items performing better.
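The practical upshot can be sketched as a transport chooser (the threshold and names here are ours, and conservative): if the hash-encoded payload would push the URL past roughly 2000 characters, a commonly quoted safe cross-browser working limit, fall back to a PHP form method=POST navigation …

```javascript
// Hypothetical chooser: hashtag (clientside) navigation for small
// payloads, PHP form method=POST fallback for large ones.
var URL_BUDGET = 2000; // conservative practical URL length limit

function chooseTransport(baseUrl, payload) {
  var hashUrl = baseUrl + '#' + encodeURIComponent(payload);
  return (hashUrl.length <= URL_BUDGET)
    ? { method: 'hash', url: hashUrl }
    : { method: 'post', url: baseUrl }; // payload travels in the form body
}
```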
And so, further to yesterday’s Spliced Audio/Video Data URL Tutorial we added browsing for local media files as a new option for user input, by calling …
function ahere(evt) { // audio duration now known ... fill the "end of play" textbox (durtoid is a global naming that textbox)
if (document.getElementById('sanaudio').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(document.getElementById('anaudio').duration);
//alert('audio here');
//alert('audio here ' + document.getElementById('anaudio').duration);
}
}
function alere(evt) { // alternative audio event path, same duration logic
if (document.getElementById('sanaudio').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(document.getElementById('anaudio').duration);
//alert('audio Here');
}
}
function vhere(evt) { // video duration now known
if (document.getElementById('sanvideo').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(document.getElementById('anvideo').duration);
//alert('video here');
//alert('video here ' + document.getElementById('anvideo').duration);
}
}
function vlere(evt) { // alternative video event path, same duration logic
if (document.getElementById('sanvideo').src.trim() != '') {
document.getElementById(durtoid).value=Math.ceil(document.getElementById('anvideo').duration);
//alert('video Here');
}
}
This works if you can fill in the src attribute of the relevant subelement source element with a suitable data URL (we used the changed do_away_with_the_boring_bits.php helper PHP to derive it). From there, in that event logic, an [element].duration is on hand to help fill out those “end of play” textboxes in a more automated fashion for the user who wants to use this new functionality as they fill out the Spliced Media form presented.
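One wrinkle worth noting: [element].duration reads as NaN (or Infinity for some streams) until the media metadata has loaded, so the Math.ceil call only makes sense inside a loadedmetadata style event. A guarded version of the rounding step (our own defensive variant, not the web application’s code) could be …

```javascript
// Defensive rounding of a media element's duration for the
// "end of play" textbox; returns '' while duration is not yet usable.
function durationForTextbox(duration) {
  if (!isFinite(duration) || duration <= 0) { return ''; }
  return String(Math.ceil(duration));
}

// Hypothetical wiring (browser only; durtoid as in the handlers above):
// document.getElementById('anaudio').addEventListener('loadedmetadata',
//   function(e) {
//     document.getElementById(durtoid).value =
//       durationForTextbox(e.target.duration);
//   });
```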
Today we’ve written a third draft of an HTML and Javascript web application that splices up to nine bits of audio or video or image input together, building on the previous Spliced Audio/Video/Image Overlay Tutorial as shown below, here, and that can take any of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
video
image … and background image for webpage
… for either of the modes of use, that being …
discrete … or “Optional”
synchronized … or “Overlay”
… all like yesterday, but this time we allow you to “seek” or position yourself within the audio and/or video media. We still “fit” all this into GET parameter usage. Are you thinking we are a tad lazy with this approach? Well, perhaps a little, but it also means you can do this job using just clientside HTML and Javascript, without having to involve any serverside code like PHP, and in this day and age people are much keener on “just clientside”, or “just client looking, plus, perhaps, Javascript serverside code” (à la Node.js), or perhaps “Javascript clientside code plus Ajax methodologies”. In any case, it does simplify design to not have to involve a serverside language like PHP … but please don’t think we do not encourage you to learn a serverside language like PHP.
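As a sketch of the “fit it into GET parameters” idea, the seek positions can travel as plain query string numbers and be read back clientside with URLSearchParams (the parameter names here are ours, for illustration only) …

```javascript
// Hypothetical GET parameter scheme for seek positions, e.g.
//   splice_audio.html?vstart=23&vend=47
function parseSeekParams(queryString) {
  var params = new URLSearchParams(queryString);
  return {
    start: Number(params.get('vstart') || 0), // seconds into the media
    end: Number(params.get('vend') || 0)      // 0 means "play to the end"
  };
}
```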
While we are at it here, we continue to think about the mobile device unfriendliness of our current web application, it being, these days, that setting the autoplay property for a media object is frowned upon on these mobile devices … for reasons of “runaway” unknown charge issues, as you can read at this useful link … thanks … and where they quote from Apple …
“Apple has made the decision to disable the automatic playing of video on iOS devices, through both script and attribute implementations.
In Safari, on iOS (for all devices, including iPad), where the user may be on a cellular network and be charged per data unit, preload and auto-play are disabled. No data is loaded until the user initiates it.” – Apple documentation.
A link we’d like to thank regarding the new “seek” or media positioning functionality is this one … thanks.
Also, today, for that sense of symmetry, we start to create the Audio objects from now on using …
document.createElement("AUDIO");
… as this acts the same as new Audio() to the best of our testing.
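With durations in hand for each of the up to nine media items, the sequential splice boils down to cumulative start offsets plus an “ended” event chain. A sketch of the offset arithmetic (our own framing of the idea, not the web application’s code) …

```javascript
// Given per-item durations in seconds, compute when each spliced
// item starts relative to the beginning of the whole play.
function startOffsets(durations) {
  var offsets = [];
  var t = 0;
  for (var i = 0; i < durations.length; i++) {
    offsets.push(t); // this item begins where the previous ones ended
    t += durations[i];
  }
  return offsets;
}

// Hypothetical chaining (browser only): when one piece ends, play the next.
// mediaElems.forEach(function(el, k) {
//   el.onended = function() {
//     if (mediaElems[k + 1]) { mediaElems[k + 1].play(); }
//   };
// });
```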
For your own testing purposes, if you know of some media URLs to try, please feel free to try the “overlay” of media ideas inherent in today’s splice_audio.html live run. For today’s cake “prepared before the program” we’ve again channelled the GoToMeeting Primer Tutorial which had separate audio (albeit very short … sorry … but you get the gist) and video … well, below, you can click on the image to hear the presentation with audio and video synchronized, but only seconds 23 through to 47 of the video should play, and the presentation ending with the image below …
We think, though, that we will be back regarding this interesting topic, and hope we can improve mobile device functionality.
Today we’ve written a second draft of an HTML and Javascript web application that splices up to nine bits of audio or video or image input together, building on the previous Splicing Audio Primer Tutorial as shown below, here, and that can take any of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
video
image … and background image for webpage
… for either of the modes of use, that being …
discrete … or “Optional”
synchronized … or “Overlay”
The major new change here, apart from the ability to play two media files at once in our synchronized (or “overlayed”) way, is the additional functionality for Video, and we proceeded thinking there’d be a Javascript DOM OOPy method like … var xv = new Video(); … to allow for this, but found out from this useful link … thanks … that an alternative approach for Video object creation, on the fly, is …
var xv = document.createElement("VIDEO");
… curiously. And it took us a while to twig to the idea that to have a “display home” for the video on the webpage we needed to …
document.body.appendChild(xv);
… which means you need to take care of any HTML form data already filled in, that isn’t that form’s default, when you effectively “refresh” the webpage like this. Essentially though, media on the fly is a modern approach possible fairly easily with just clientside code. Cute, huh?!
Of course, what we still miss here is the capability to upload from a local place onto the web server here at RJM Programming. We may consider that in future, and some of those other synchronization of media themed blog postings of the past, which you may want to read, touch on this type of approach.
In the meantime, if you know of some media URLs to try, please feel free to try the “overlay” of media ideas inherent in today’s splice_audio.html live run. We’ve thought of this one. Do you remember how the GoToMeeting Primer Tutorial had separate audio (albeit very short … sorry … but you get the gist) and video … well, below, you can click on the image to hear the presentation with audio and video synchronized, and the presentation ending with the image below …
We think, though, that we will be back regarding this interesting topic.
Today we’ve written a first draft of an HTML and Javascript web application that splices up to nine bits of audio input together that can take either of the forms …
audio file … and less user friendly is …
text that gets turned into speech via Google Translate (and user induced Text to Speech functionality), but needs your button presses
Do you remember, perhaps, when we did a series of blog posts regarding the YouTube API, that finished, so far, with YouTube API Iframe Synchronicity Resizing Tutorial? Well, a lot of what we do today provides similar sorts of functionality, but just for Audio objects in HTML5. For help on this we’d like to thank this great link. So rather than have HTML audio elements in our HTML, as we first shaped to do, we’ve taken the great advice from this link, and gone all Javascript DOM OOPy on the task, to splice audio media together.
There were three thought patterns going on here for me.
The first was a simulation of those Sydney train public announcements where the timbre of the voice differs a bit between when they say “Platform” and the “6” (or whatever platform it is) that follows. This is pretty obviously computer audio “bits” strung together … and we wanted to get somewhere towards that capability.
The second one relates to presentation ideas following up on that “onmouseover” Siri audio enhanced presentation we did at Apple iOS Siri Audio Commentary Tutorial. Well, we think we can do something related to that here, and we’ve prepared this cake audio presentation here, for us, in advance … really, there’s no need for thanks.
The third concerns our eternal media file synchronization quests here at this blog that you may find of interest we hope, here.
Also of interest over time has been the Google Translate Text to Speech functionality that used to be very open, and that we now only use around here in an interactive “user clicks” way … but we still use it, because it is very useful, so, thanks. Trying to get this method working for “Platform” and “6” without a yawning gap in between ruins the spontaneity and fun somehow, but there’s nothing stopping you making your own audio files yourself, as we did in that Siri tutorial called Apple iOS Siri Audio Commentary Tutorial, taking the HTML and Javascript code you could call splice_audio.html from today, and going off to make your own web application? Now, is there? Huh?