AngularJS Subtitles

I watch a lot of Scandinavian video files, which means a lot of subtitles. Being a fussy sort of a fellow, I do get irked by badly timed and badly typed subtitles, so for some months I have been contemplating writing a subtitle editor. I’ve combined that urge with my curiosity about AngularJS. I have been irritated by JavaScript since version 1.0, and have so far been unimpressed with everything but MooTools. Yet the weight of Google, and its usually intelligent solutions, is encouraging.

Starting from scratch, it took about eight hours to create an SRT viewer in ‘the Angular way.’ That involved very few new ideas — a blurry MVC, more of an MVP; mustache-style templates that mix presentation; Backbone-like views — but some nice dependency injection, and a huge pile of patterns to choose from (which will probably be a PITA, if the TMTOWTDI motto of Perl is anything to go by).

Eight hours to produce an SRT viewer, which is a task I’d usually accomplish in two hours.

I am hoping the investment in time will pay a return now that I set to work on the editing controls. Currently, Angular’s two-way data-binding allows me to edit subtitles — moving them as a mass, and adding new entries, is coming up.

One thing I do find encouraging is the community, which is very helpful, and not snooty like the Perl community, which seems to be slowly drifting away in the Perl 6 pipe smoke. Angular’s API seems to change quite often (not a bad thing), and these changes are regularly pointed out on Stack Overflow.

One thing that really annoys me is the need to call $scope.$apply() — so hard to avoid.

What really pisses me off, though, is that the tutorial is not up to date with the tools. There is little more off-putting than starting a tutorial and having to reconfigure everything. Surely Google have the resources…?

What’s the alternative?

But the two libraries do things very differently. Ractive.js is designed to have as few surprises as possible – no mysterious $scope.$apply() or dependency injection minification headaches! You can master Ractive.js in an hour or two, just by following the interactive tutorials.

So claims Ractive.js — and it has to be worth a look.

JavaScript and Perl Object-orientation: a small comparison

I began programming Perl in 1997, as my university did not run Netscape Live Server – the original server-side JavaScript – and I wanted access to a database.

So many years later, it looks as if JavaScript is back in a serious way: despite limitations in the implementations of ECMAScript, the standard of which JS is now an implementation, the language is more powerful than the average Perl developer is led to believe.

Perl programmers tend to frown on JS as a toy language, the way C++ developers look down on Perl developers. Yet Perl and JavaScript have an awful lot in common: both are easy to extend, both in their own syntax and in C; both are regularly extended with library frameworks (Moose, MooTools).

Whilst both have a slightly unusual approach to OO, the main conceptual difference is that JavaScript is OO to a much greater extent than Perl. For example, whereas Perl has a length() function, JS strings carry their own length property. More interestingly, JavaScript’s object orientation is prototype-based – there are no classes, only objects and object instances, built from functions that operate upon a built-in “this” variable, and which generate instances of custom objects when called with the “new” keyword.
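As a minimal sketch of that prototype mechanism (the names here are my own, purely illustrative):

```javascript
// Constructor function: "this" is the new instance when called with "new".
function Monologue (speaker) {
    this.speaker = speaker;
    this.lines = [];
}

// Methods live on the prototype, shared by every instance.
Monologue.prototype.add = function (line) {
    this.lines.push(line);
    return this; // allow chaining
};

Monologue.prototype.length = function () {
    return this.lines.length; // a method, where Perl would use a function
};

var m = new Monologue('Hamlet');
m.add('To be').add('or not to be');
console.log(m.length()); // → 2
```

Perl would reach the same place with a blessed hashref and a package full of subs; in JavaScript the constructor, the methods and the instances are all just functions and objects.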


This page contains JavaScript which is documented in comparison with Perl: a comparison between common OO features of Perl and JavaScript.

The Problem with MooTools…

MooTools is without doubt the best JavaScript extension around, providing clean and simple object orientation features such as classes, inheritance, pseudo-events, and more. Yet advertised JS programming roles rarely require this skill, whereas the much more limited, and much less clear, jQuery is almost always required.

jQuery’s primacy is not because it is a ‘better’ system: it doesn’t even have a class model, and class-like features have to be hacked-together in a non-standard manner. jQuery sells because it looks good, and has a huge number of reliable plug-ins.

I recently had call to produce a live data grid, reflecting a database table, and allowing users to make updates to the database table by editing the table on the screen. I found a jQuery plugin within five minutes, and had the code up and running in 15: all credit to jQuery and the plug-in author.

Doing this in MooTools can take all day, though the code produced is easier to maintain and update with new features.

Part of the problem is that MooTools is more compartmentalised. A recent example I came across was trying to produce charts from tabulated data. The wonderful MilkChart library for MooTools, by Brett Dixon, produces beautiful business-ready charts from HTML tables – but not from MooTools’ Table.Sort or Table.Paginate tables. It only took ten minutes this morning to produce a GitHub fork of the project, updated to support these core MooTools features, but it seems curious that such elementary compatibility has been overlooked for so long. Perhaps I can gauge the health of MooTools by the state of this fork’s pull request.

JavaScript Scrolling Game

The JavaScript ‘Defender’ game proved too complicated for the kids, so a simplified version is the Illy Game.

Seems too simple to me, but three- to seven-year-olds seem to enjoy it – and I’ve seen versions, with bigger and friendlier graphics, used on the BBC’s children’s site, CBeebies.

Diary of a JavaScript Pacman

In about 1998, I wrote a version of Pacman, called Paxman, for Netscape JavaScript.

Having recently spent a few days writing a simple Defender game with the MooTools JavaScript extension, I thought it would be interesting to see how long Pacman would take.


Leave Sprite as it is, extend for multiple directions later

Tend Jake

Resume. Pacman image. Make coffee whilst Photoshop starts, just like the old days.


Tend Jake.

Implement grid model in Sprite.render

Spent half an hour debugging because I forgot to set the canvas width/height. Moving on to set the map array from a string
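That map-from-a-string step might look something like this (a sketch of my own, not the actual game code; ‘#’ for walls and ‘.’ for pills are assumptions):

```javascript
// Build a 2D grid from a multi-line string: '#' = wall, '.' = pill, ' ' = empty.
function mapFromString (str) {
    return str.split('\n').map(function (row) {
        return row.split('').map(function (ch) {
            return { wall: ch === '#', pill: ch === '.' };
        });
    });
}

var map = mapFromString(
    '#####\n' +
    '#...#\n' +
    '#####'
);
console.log(map.length);     // → 3 rows
console.log(map[1][1].pill); // → true
```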

Have a pacman moving on the canvas. Start collision detection based on grid

Must remember to impose the maze graphic in the canvas’ parent layer, and make canvas transparent like in BrushMess.js – much faster than rendering a map image per frame

Play with Jake

Jake wants to watch a Thomas review. Back to work. Start collision detection now that refactoring-introduced bug is fixed

Create a ghost class.

Ghost caught player. Need more maze detail.

Big map, will render as graphics for dev. Play with Jake.

Back – render map for dev.

Sprites move in maze. Wrap off-screen moves

All is fine except that the sprites do not follow the grid cleanly

Damn. Will come back to this. Eat pills.

Added pills to the map canvas, which is now a field. Changed map to allow pills to be included, and have the main controller remove pills and increment score when player eats them. Add score display.

Changed .element to .elements hash to accept more screen elements in init.

Spent some time on ghost logic. Still only one ghost, but should be simple to extend. Illy home from school, so play with the kids – end of day.

Day 2

Start work – finish ghost logic

Collision logic only called if grid cell changed. Have ghost eat pacman.
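That optimisation amounts to something like the following (hypothetical names; the cell size is an assumption):

```javascript
// Only fire collision logic when a sprite crosses into a new grid cell.
var CELL = 16; // pixels per grid cell (assumed)

function cellFor (x, y) {
    return { col: Math.floor(x / CELL), row: Math.floor(y / CELL) };
}

function maybeCollide (sprite, onCellChange) {
    var cell = cellFor(sprite.x, sprite.y);
    if (!sprite.cell || cell.col !== sprite.cell.col || cell.row !== sprite.cell.row) {
        sprite.cell = cell;
        onCellChange(cell); // run the (relatively expensive) collision checks here
    }
}

var pacman = { x: 0, y: 0 }, checks = 0;
maybeCollide(pacman, function(){ checks++; }); // first call: cell set, check runs
pacman.x = 4;                                  // still inside cell (0,0)
maybeCollide(pacman, function(){ checks++; }); // same cell: no check
pacman.x = 17;                                 // crossed into cell (1,0)
maybeCollide(pacman, function(){ checks++; });
console.log(checks); // → 2
```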

Ghost eat pacman, lives displayed, game-over logic done. Next: coffee, more ghosts (later individualise ghost logic for ‘personalities’) and then power pills and ghost run-away mode.

Back. Multiple ghosts requires delayed start.

Back to power-pills changing ghost mode from chase to runaway.

This isn’t as straight-forward as I’d thought, so I’m writing it out in long-hand and will optimize later.

Could update the ghost sprites to use different graphics based on mode… Could check all pills are eaten, and reset the map/ghosts/player so ghosts are faster, which means slowing ghosts down in the beginning. Will look at later, so the game can be played.

Done, and also made the ghosts’ initial direction reflect player position. Now make sprites reflect direction of movement.

Invert direction of travel as soon as powerpill is eaten.

Done. Time for lunch. Not bad progress, I suppose: a playable Pacman game in less than eight billable work hours, though there remains work to be done: perhaps an hour or two on ghost personalities via sub-classing the Pacman.Ghost object, and there is still outstanding the bug that occasionally makes it seem that the player sprite is travelling inside the wall.

Added sound for level start, which meant wrapping up some parts into a function on which to call .delay (i.e. setTimeout). Just remembered Pacman needs to eat ghosts. Can I be bothered?

Day 3

I was a bit slack yesterday, and went back to L-System sound generation, so today I updated Pacman so that the ghost logic is a bit more efficient, using Euclidean distance, and allowed Pacman to eat the ghosts. I should update the graphics so the ghosts’ eyes point in the direction they are going, but it is a tedious job. I should also allow the ghosts to turn blue when ghost.mode==’runaway’, but again, a bit tedious. And, also tedious, I should put the maze graphic on the under-layer canvas, and remove the ugly blue blocks. But the kids find the game hilarious as it is, so…


The code is all at Please fix the bugs and send me the results. Thanks.


On-the-fly Audio PCM Waveform Visualisation

This code produces playable PCM waveform graphs from any sound file supported by your WebAudio API-compatible browser. That’s not Mozilla.

As the sound is played, the graphed waveform fills with a colour of your choice.

The code can be called via a JS API, but is designed to be called via HTML mark-up, taking its foreground and background colours from the container element that contains a specific CSS class.

Full details on the MooTools Forge / GitHub pages, listed below.


The code below is MooTools based to the extent possible – this allows for quick and clear access to element styles, for example.

However, MooTools’ Request.js does not yet allow access to the XHR Level 2 responseType field. I have updated the core code and issued a pull request; we’ll see what happens. The caveats are that my patch does not attempt to address any aspect of XHR Level 2 other than the issue at hand, and does not address what has now become an underlying problem of the Request class: it is text-centric.




Creating Simple e-Cards with Dynamic Text Wrapping

someecards allows anyone to create a retro-styled electronic greeting-card, and share it on Facebook or anywhere else.

The cards crop up all over my Facebook news feed, and it seems obvious that if Facebook don’t buy them soon, somebody else will – after all, Facebook posts with images always float to the top. someecards is a great little site, but the card-creation interface is more flash than intuitive, so I spent a few days reworking it and ended up with my own version. My images are cheaply ripped off the internet, but I think my HTML interface is easier to use than their Flash interface, and I think my text-wrapping is better, too.

DIY dynamic text wrapping

The USP of the app is its ability to wrap text into and around an image. This could be done in real-time, by reading pixels on the right of the caret, or by using a map into which the text can flow.

Since I didn’t wish to limit the app to HTML5-enabled browsers, I avoided pixel-reading in JavaScript and opted for the latter: the server reduces the image in height to reflect the number of lines of text that would be used in the GUI, relying upon the image-manipulation library to re-sample the image such that I could read surviving pixels either side of a variable threshold, and stores the data in a JSON file for the GUI to access.

Once the GUI has this map, it is simply a matter of overlaying an image on a suitably-sized textarea. For every keyup event fired by that element, the text is re-flowed by replacing all line-feeds/carriage-returns with spaces, and adding to the text element a word at a time, until the server-generated map specifies that the maximum line-length has been reached. Line length is specified in pixels, and word-length is measured by rendering the word in an off-screen element, and measuring the width to which that element grows.
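The re-flow loop described above can be sketched as follows; the measure() stub stands in for rendering the word in the off-screen element and reading back its width, and all names are mine:

```javascript
// Re-flow text one word at a time against a per-line pixel budget.
function reflow (text, maxLinePx, measure) {
    var words = text.replace(/[\r\n]+/g, ' ').split(/ +/);
    var lines = [''], i = 0;
    words.forEach(function (word) {
        var candidate = lines[i] ? lines[i] + ' ' + word : word;
        if (measure(candidate) > maxLinePx && lines[i]) {
            lines[++i] = word; // budget exceeded: start a new line
        } else {
            lines[i] = candidate;
        }
    });
    return lines;
}

// Stub: pretend every character is 10px wide. In the browser this would
// render the candidate string off-screen and read the element's width.
var measure = function (s) { return s.length * 10; };
console.log(reflow('the quick brown fox jumps', 100, measure));
// → [ 'the quick', 'brown fox', 'jumps' ]
```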

When complete, the whole is then sent back to Perl for rendering as a static image.

 On the other hand…

That was a couple of months ago, and having avoided HTML5 on the project, it seems inevitable to extend the code to make the most of HTML5 – if only because I can, and it is more fun than Perl.

When writing a version of Defender in HTML5, I found the context.getImageData() method to be extremely fast, and certainly practical for a task such as this. Combined with the ability to render images and text to the canvas, and then have the user save the canvas – or even send its data to the server for rendering as a static png – the only question I can’t answer is how to find a VC with marketing enough to buy and sell these goodies.


The application.

HTML5 PCM Waveform Visualisation

The oud riff was recorded at home through an 01X, visualised by some JavaScript code knocked-up in an hour one Monday morning.

One of the many wonderful features of the HTML5 Web Audio API is the ability to access the PCM data of an audio file. Combined with the speed of contemporary machines, this enables an HTML5 web page running in WAA-compatible browser to display the waveform of the playing file in real time – and it is really, really easy to use.

Access to the PCM data is via an ‘analyser’, which is inserted into the audio node chain, and provides an array of floats or unsigned integers for each audio ‘frame.’ The API allows the user to specify the number of samples to return in a frame, and for the purpose of this illustrative example, the default 2048 was replaced with 512, which is plenty for the non-professional eye, and allows for a relatively fast canvas refresh rate.

Access this data at regular intervals, render it to a canvas, and you’re done.
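For instance, one frame of byte time-domain data (a Uint8Array centred on 128 for silence) can be mapped to canvas points with a helper like this (a sketch; the function and variable names are mine):

```javascript
// Map one frame of byte time-domain data (0-255, silence at 128)
// onto canvas coordinates: one x step per sample, y scaled to height.
function frameToPoints (frame, width, height) {
    var step = width / frame.length, points = [];
    for (var i = 0; i < frame.length; i++) {
        points.push({
            x: i * step,
            y: (frame[i] / 255) * height
        });
    }
    return points;
}

// A fake four-sample frame of silence:
var silent = new Uint8Array([128, 128, 128, 128]);
var pts = frameToPoints(silent, 512, 100);
console.log(pts.length); // → 4

// In a browser, the frame would come from the analyser:
//   analyser.fftSize = 512;   // instead of the default 2048
//   analyser.getByteTimeDomainData(frame);
// and the points would be joined with ctx.lineTo() on each repaint.
```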

To make things a little more interesting, I thought to maintain a stack of the past n PCM waves, and render them disappearing into the distance. My maths skills are not advanced, and I spent some time messing around with scaling and translation factors applied directly to the stored co-ordinates, before smacking my head for my own stupidity when remembering that the canvas context supplies ‘scale’ and ‘translate’ functions to do this: for each generation in the frame stack, the origin is shifted up, and the scale is decreased, to create a perspective effect.
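The per-generation transform amounts to this (the step and shrink factors here are illustrative assumptions):

```javascript
// For generation g in the frame stack (0 = newest), shift the origin up
// and shrink the scale, so older waves recede into the distance.
function perspectiveFor (g, stepY, shrink) {
    return {
        translateY: -g * stepY,    // origin moves up per generation
        scale: Math.pow(shrink, g) // e.g. 0.9, 0.81, 0.729...
    };
}

// In the render loop, each stored wave would be drawn with:
//   ctx.save();
//   ctx.translate(0, t.translateY);
//   ctx.scale(t.scale, t.scale);
//   drawWave(stack[g]);
//   ctx.restore();
var t = perspectiveFor(2, 20, 0.9);
console.log(t.translateY); // → -40
```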


Deep joy – requestAnimationFrame actually works, so finally setTimeout/.periodical is ditched. I found this shim, and slotted it into place within seconds:

window.requestAnimFrame = (function(){
  return  window.requestAnimationFrame       ||
          window.webkitRequestAnimationFrame ||
          window.mozRequestAnimationFrame    ||
          window.oRequestAnimationFrame      ||
          window.msRequestAnimationFrame     ||
          function( callback ){
            window.setTimeout(callback, 1000 / 60);
          };
})();

Frankly, I see no performance difference, but I am excited by the possibilities for games programming.


When dabbling with the scaling factors, I was tempted to shift to WebGL. I may try this next, using three.js. May also render an opaque polygon between generations, to create mountains. Or maybe I’ll just increase the number of generations – and hence their visual density – until the fan gets too loud.


Plonk – Like Plink, in two and a half mornings


When looking for something to implement using HTML5 Web Sockets, other than a chat server, I came across Plink, by the Swedish company DinahMoe, and so ripped off the idea.

When watching Plink, it seemed obvious to implement it as … a chat server, inasmuch as a number of users are linked to a persistent-state server that relays each user’s messages to all other users, in real time. The interesting thing about Plink is that the messages are musical sounds.

Monday: Relay Server

The first task was to write or find a Web Socket relay server. Node.js seemed the obvious choice, but support for Web Sockets in Node.js is patchy, and as I wanted to play with the latest version of the specification, I had only the web-socket library to look at. Unfortunately, this failed to build on OS X, and the support response was ‘you need Xcode’, which I have had since buying the Mac.

I thought of trawling through the Maven repositories for something suitable, but on my way there checked what Perl had to offer. After getting side-tracked by the usual half-baked or badly documented Perl implementations, I found that Mojolicious has a simple WS daemon that can run out of the box. It took literally five minutes to install the package and have the relay up and running – a fraction of the time it took to find it.

use strict;
use warnings;

use Mojolicious::Lite;

my $clients_tx      = {};
my $clients_cursors = {};

websocket '/' => sub {
	my $self = shift;
	my $client_id = sprintf "%s", $self->tx;
	$clients_tx->{$client_id} = { tx => $self->tx };

	$self->on(message => sub {
		my ($self, $msg) = @_;
		my ($x, $y, $pitch) = split /,/, $msg;
		if (defined $y){
			$clients_cursors->{ $client_id }->{xy}    = [$x+0, $y+0];
			$clients_cursors->{ $client_id }->{pitch} = $pitch ne '' ? $pitch+0 : undef;
			# Relay the complete cursor state to every connected client:
			for my $i (keys %$clients_tx) {
				$clients_tx->{$i}{tx}->send({ json => {
					cursors => $clients_cursors
				}});
			}
		}
	});

	$self->on(finish => sub {
		delete $clients_tx->{$client_id};
		delete $clients_cursors->{$client_id};
	});
};

app->start;

The code simply receives a CSV of three
numbers representing the cursor position and selected pitch, which it
stores for each connected client. Every time a client sends this
information, the server responds with the latest copy of the
information for all clients.
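Client-side, that wire format is trivial to encode and decode (the field names here are my own):

```javascript
// Encode a cursor update as "x,y,pitch" (pitch may be blank when no note).
function encodeCursor (x, y, pitch) {
    return [x, y, pitch == null ? '' : pitch].join(',');
}

// Decode the same message on the other side.
function decodeCursor (msg) {
    var parts = msg.split(',');
    return {
        xy: [ +parts[0], +parts[1] ],
        pitch: parts[2] === '' ? null : +parts[2]
    };
}

console.log(encodeCursor(120, 45, 7)); // → "120,45,7"
console.log(decodeCursor('120,45,'));  // pitch is null: cursor moved, no note
```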

Prototype 1



It’s not easy to take a screenshot whilst playing with two mice.

The first stage of the prototype was boilerplate – setting up an Apache virtual server, creating directories to hold JS and CSS, and a basic HTML5 document to pull in MooTools, Modernizr, the blank application CSS file, and a fresh MooTools class definition, the latter linked to an element in the body of the page into which the app could insert itself.


Capturing mouse movement events is straightforward, and once their offsets were removed, I was ready to send the co-ordinates to the server. When jotting down the server code, above, I had already decided to send both co-ordinates and pitch information, as I wasn’t quite sure how I would implement the functionality of the app, and wanted to leave my options open. I’ve been a victim of premature optimisation in the past.

Next I had to find the Web Socket API. There are many examples of this lying around the net, and it took only a minute to implement the necessary callbacks, and have the server logging connections and messages, and the client console logging what it was sending and receiving.

After adding to my websocket .message callback some code to render
a circle at the co-ordinates specified in the message, I could see a
mess under my cursor whenever I moved it.

Next came my only implementation error – at least, the only one
I have noticed.

Plink shows the current note the user is playing as a circle on
the screen, but also shows past notes as scrolling horizontally away
from the cursor: very pretty. I remember implementing horizontal
scrolling for games on a Vic-20 in the early 1980s, and the fastest
way of doing it then was to shift a block of memory by the number of
bytes to be scrolled, an operation easily and quickly done even then,
by a ten-year old.

HTML5 is not so simple, and the canvas element still lacks a built-in scrolling function. This left four choices. Two involved dropping the canvas in favour of either SVG or WebGL; I’ve played with both, and found basic operation in both to be similarly straightforward: in either case, I would create a renderable object for every ‘note’. The other two kept the canvas: either do the same and wipe the canvas between renderings, or copy the canvas element, wipe the original, and render the copy one pixel to the left.

My thoughts were that the latter would be the simplest, and
closest to the pattern I had learnt as a kid.

scroll: function(){
	var destinationCanvas = this.canvas.clone();
	destinationCanvas.cloneEvents( this.canvas, 'mousemove');
	var destCtx = destinationCanvas.getContext('2d');
	// Redraw the current canvas shifted one pixel to the left
	// (this drawImage call is a reconstruction; the original line was lost):
	destCtx.drawImage( this.canvas, -1, 0 );
	destinationCanvas.replaces( this.canvas );
	this.canvas = destinationCanvas;
	this.ctx = destCtx;
}


I hooked the above method up to a timer, via the MooTools .periodical method, and was mildly pleased to see a trail from the cursor, disappearing off the edge of my screen.

Prototype 2

Before the day ended, I wanted to hear sounds, and as I browsed
for API notes and implementation examples, I noticed my fan buzzing,
and Mozilla getting sluggish. I flicked, slowly, through my tabs, and
found the Plink rip-off crawling. I killed the server, closed the
tab, and everything was fine. I still don’t know exactly what the
problem was: I tried minimising the rate messages were sent to the
server, and the frequency of canvas renderings, but in the end went
to have tea, and gave up for the day.


For some reason I felt an attachment to the canvas copying method, but dropped it all the same, and had the .message handler push the latest batch of JSON objects onto a stack. I then rewrote the periodical scroll method to render everything on the stack, making the latest object appear on the right, decrementing everything by one cursor width, and dropping from the stack anything that would fall off the screen. This removed the performance problem, and reinforced my desire for a native canvas.scroll method.
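The rewritten scroll step boils down to this (a sketch; the names are mine):

```javascript
// Shift every queued note left by one cursor-width and drop those
// that have scrolled off the left edge; newest notes enter at the right.
function scrollStack (stack, cursorWidth) {
    return stack
        .map(function (note) {
            return { x: note.x - cursorWidth, y: note.y, pitch: note.pitch };
        })
        .filter(function (note) {
            return note.x + cursorWidth > 0; // still at least partly visible
        });
}

var stack = [ { x: 0, y: 10, pitch: 3 }, { x: 200, y: 20, pitch: 5 } ];
stack = scrollStack(stack, 10);
console.log(stack.length); // → 1 (the x:0 note fell off the screen)
```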

Web Audio API

In my experiments writing an audio sequencer for SoundCloud files, I learnt the limitations of the HTML5 audio element – it is to the Web Audio API as the video element is to the Web Graphics Library. So I found a decent introduction to the Audio API, on HTML5Doctor, and had sound within a few minutes, hooked up in a new periodical method. I had the .send method calculate pitch by computing within which of a number of zones the vertical position of the mouse fell, and then spent an hour finding sounds in Logic, and exporting individual notes as wav files, loaded into buffers stored in arrays, whose indexes could be accessed by the ‘pitch’ index sent to the server.
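The pitch calculation reduces to an even division of the vertical axis (the zone count here is an assumption):

```javascript
// Divide the vertical play area into equal zones; the zone index under
// the cursor becomes the pitch index into the note-buffer array.
function pitchForY (y, height, zones) {
    var zone = Math.floor((y / height) * zones);
    return Math.min(zones - 1, Math.max(0, zone)); // clamp to valid indices
}

console.log(pitchForY(0, 400, 8));   // → 0 (top of screen)
console.log(pitchForY(399, 400, 8)); // → 7 (bottom)
console.log(pitchForY(250, 400, 8)); // → 5
```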

Plink has a percussion track, so I extended the sound-firing
method to play the drums as appropriate:

options: {
	percussion: {
		kick: [[4,1]],
		snare: [[8,1], [16,2]],
		hat_closed: []
	}
}


self.percNames.each( function(instrument){
	self.options.percussion[instrument].each( function(i){
		// i[0] is the pulse divisor, i[1] the offset within it
		if (self.pulseNumber % i[0] == i[1]){
			self.playNow( self.percBuffers[instrument] );
			self.scaleCursor += self.options.cursorScaleIncrement;
		}
	});
});




Notice that routine also changes the cursor size, as in Plink, so the visual echo of the sound reflects the underlying rhythm.

My children then took over the development machine for beta
testing, and I joined in on the old media centre PC, on which I had
to install Chrome, which seems to be the only limitation of the
project, other than my inability to find a free host for my
web-socket relay code: my current ISP is too cheap to provide this.

I was quite pleased with the effort, and dropped a line to the address DinahMoe publicised for job applicants, but no word yet: perhaps they realise Plink isn’t that hard to do. Whilst I was there I had a quick look at the Plink source code, and was only mildly horrified, mainly by the amount of hard-coding, lack of framework, and lack of white-space.


I then added the ability to change patches, as in Plink, as well
as some options to have the horizontal cursor position equate to
volume and panning – two options the children found far from
intuitive, and quite intrusive into their play.

Also added the ability for patches to only sound if pre-required,
much in the manner of the percussion track.


Node.js is beautiful: clearer and more natural than Perl, faster to write and install than Java, and performs as well as both. The following Node.js websocket server took ten minutes to write, based on the stub code that came with the module. The only change made to the Plonk.js client was to add a sub-protocol argument to the WebSocket instantiation.

#!/usr/bin/env node
var WebSocketServer = require('websocket').server;
var http = require('http');

var cursors = {};

var server = http.createServer(function(request, response) {
    console.log((new Date()) + ' Received request for ' + request.url);
    response.writeHead(404);
    response.end();
});
server.listen(3000, function() {
    console.log((new Date()) + ' Server is listening on port 3000');
});

wsServer = new WebSocketServer({
    httpServer: server,
    autoAcceptConnections: false,
    maxReceivedFrameSize: 50
});

function originIsAllowed(origin) {
  // put logic here to detect whether the specified origin is allowed.
  return true;
}

wsServer.on('request', function(request) {
    if (!originIsAllowed(request.origin)) {
      // Make sure we only accept requests from an allowed origin
      console.log((new Date()) + ' Connection from origin ' + request.origin + ' rejected.');
      request.reject();
      return;
    }

    var connection = request.accept('sec-websocket-protocol', request.origin);
    console.log((new Date()) + ' Connection accepted.');

    // Any unique per-connection key will do (the original derivation was lost):
    var id = connection.remoteAddress + ':' + (new Date()).getTime();

    connection.on('message', function(message) {
        if (message.type === 'utf8') {
            var csv = message.utf8Data.split(',');
            cursors[id] = {
                userId:      csv[0],
                xy:          [ parseInt(csv[1]), parseInt(csv[2]) ],
                scaleCursor: parseInt(csv[3]),
                gain:        parseInt(csv[4]),
                pan:         parseInt(csv[5]),
                patch:       parseInt(csv[6]),
                pitch:       parseInt(csv[7])
            };
            connection.sendUTF(JSON.stringify({ cursors: cursors }));
        }
        else if (message.type === 'binary') {
            console.warn('Ignoring received Binary Message of ' + message.binaryData.length + ' bytes');
        }
    });

    connection.on('close', function(reasonCode, description) {
        console.log((new Date()) + ' Peer ' + connection.remoteAddress + ' disconnected.');
        delete cursors[id];
    });
});

Finally, I moved the patches into objects, along with their individual pulses,
for ease of legibility and maintenance.

In hindsight, the cross-platform features of MooTools were unnecessary, as the code will only run where Web Audio is available, which seems to be only Safari and Chrome. Still, I think it does allow for more readable code, and has no performance impact (unless it was responsible for that canvas-copying slow-down).


  • Have the event loops execute in separate WebWorker threads – as yet, I’ve
    no idea what the threading model is, but I expect it to be as
    frustrating as threading in Perl.
  • Maybe switch to WebGL, just for fun.
  • Find a way of playing like this for money.



Duplo railway parts are not like the old HO/OO Hornby parts: not only do they not hold themselves together as well, but the angles in the cross-pieces and points are different, which makes it very difficult for me to find a decent layout that will fit on the rug.

Occurs to me that the best solution could evolve to fit the functions of the available space and contiguous track, but is a genetic algorithm for Duplo really a good use of my time?

Perhaps if the processing time could be kept within a few seconds, Lego may put the application on their website?