Images on the web have been a solved problem for many years. Web designers know they can insert a JPEG, PNG, or GIF file with <img> and it will work in virtually all browsers. Video, on the other hand, is much more difficult to provide. There is a range of formats, and none of them is universally supported across browsers. Inserting video should be as easy as inserting an image: just make sure the video is in the right format and use <video> to place it. While we’re not there yet, such a reality could be closer than you think.
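As a rough sketch of what that would look like in markup (the file names here are placeholders, and the video element shown is the one defined in the HTML5 draft):

<img src="photo.jpg" alt="A photo">

<video src="talk.ogv" controls>
  Text shown by browsers that do not support the video element.
</video>

In practice you would also set dimensions and possibly list several source elements so that browsers could pick a format they support.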
Continue reading ‘Seamless web video within reach’
The DMCA §1201 hearings for 2009 took place at the beginning of May. These hearings will guide the anti-circumvention exemption rulemaking that happens every three years. It is important that people in the US are aware of these because the exemptions influence whether you can legally watch DVDs with free software, whether you can make fair use of online videos, and whether you can unlock or jailbreak your cell phone (see also the TPMs section of my recent talk). It is important for people outside the US as well because of the major influence US policy has on the laws introduced in other countries, especially close trading partners like Canada.
The transcripts for the first two of these hearings are now available. Though having a text version is nice, I find the plain-text format very difficult to read. That’s why I created this HTML version of the first transcript:
DMCA Section 1201 Hearing – May 1, 2009
Highlights include the §1201(11)(A) discussion on fair use of DVD content for remixing and the §1201(5)(A) discussion on iPhone jailbreaking.
This version uses <cite>, <blockquote>, and microformats to add meaning to the transcript. Along with a style sheet from Stephen Paul Weber, this version of the transcript is much easier to read. And in HTML form, it’s much easier to add a custom style sheet to fit your own viewing preferences. All changes to the transcript, including the style sheet, are licensed under the Creative Commons Attribution 3.0 Unported license and are Copyright © 2009 Stephen Paul Weber or Denver Gingerich.
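For a sense of the markup, here is a minimal sketch (the speaker and the class names are purely illustrative, so the real transcript’s markup may differ):

<p><cite class="vcard"><span class="fn">MR. SMITH</span></cite>:</p>
<blockquote>
  <p>Quoted testimony from the hearing goes here.</p>
</blockquote>

Marking each speaker with cite and each statement with blockquote is the kind of structure that lets a style sheet lay the transcript out as a readable dialogue.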
I hope this version makes the hearing transcripts more accessible. It didn’t take too long to translate the first hearing transcript into HTML, so if there is interest, I will translate the others as they become available. The conversion was done largely programmatically, so there may be errors. I’ve made an effort to ensure that the quotes match up with the speakers, but it’s possible that some don’t. Please use the original transcript as the definitive source.
On May 8, Adobe submitted a takedown notice to SourceForge.net requesting that the rtmpdump project be removed from their site. SourceForge.net removed the project this past week. For more details, see the original Slashdot post, an updated Slashdot post, and a new post from Linuxcentre.
Reading further into the takedown notice, we see that Adobe believes rtmpdump “can be used to download copyrighted works” and lists some pages on Channel 4 as examples. The takedown notice also states that Adobe “is the developer of technological protection measures that protect content from unauthorized copying and distribution”. This suggests rtmpdump was targeted because it circumvents technological protection measures. A post on the XBMC forum confirms that Channel 4 uses RTMPE, an encrypted version of RTMP, the protocol Flash uses to transmit video. The post also links to a Replay Media Catcher page discussing how Adobe forced them to remove RTMPE support. Though the takedown notice doesn’t state it explicitly, we can be fairly sure from these points that Adobe is targeting rtmpdump because it allows you to download content transmitted using RTMPE.
The major implication of this takedown notice is that Adobe has definitively told us that a fully-compliant free software Flash player is illegal. This is because RTMPE is part of Flash, circumventing RTMPE is illegal (in the US at least), and Adobe will never give a key to a free software project, since such a project cannot keep the key hidden. As a result, Flash cannot truly be a standard even if we ignore the codec patent problems.
Adobe’s takedown of rtmpdump reminds us that Adobe does not fully support open standards. As a result, web designers and anyone else who cares about an open web should steer clear of Adobe technologies, in particular Flash. Adobe was given the choice of supporting open standards or appeasing big media and they chose big media. Make no mistake, Adobe is an enemy of the open web.
Update (2009-05-20): 1080p and standard definition videos of the talk in Theora/Vorbis are now available. See below for details.
I presented a talk entitled “DVDs, MP3s, YouTube, and other hindrances to free software” (abstract) today at FOSSLC’s Summercamp 2009 (#fosslcSC09) in Ottawa. Here are the slides:
- Slides made with S5. To switch to a scrollable view, click the Ø.
- ZIP file of slides, including all CSS and JavaScript (29617 bytes)
Here are the videos (all videos are Copyright © 2009 FOSSLC, licensed under Creative Commons Attribution-Share Alike 2.5 Canada):
- Videos recorded on JVC Everio GZ-HD40U video camera (high-quality; recording was started a few seconds into the talk):
- Ogg Theora/Vorbis video at 1440×1080 (569 MiB). This uses a pixel aspect ratio of 4:3, so make sure your player’s aspect ratio is set to 16:9 if the video looks squished (1440×1080 with a 4:3 pixel aspect ratio displays as 1920×1080, which is 16:9).
- Ogg Theora/Vorbis video at 720×540 (101 MiB). Also uses a pixel aspect ratio of 4:3.
- Videos and data created by the ePresence system (low-quality, include slides):
- ePresence page; Flash video with slides
- Ogg Theora/Vorbis video at 320×240 (75.3 MiB). This video goes out of sync during the Q&A period, but should be fine otherwise. If your browser supports the video tag with Ogg Theora/Vorbis, the video will appear below:
- FLV Sorenson/Speex video at 320×240 (66.3 MiB). This is the source video used on the ePresence page. I can provide details on how I transcoded it to Theora/Vorbis if there is interest.
- Extra data from the ePresence page (slide images, XML data) (3.5 MiB)
- Slide timestamps XML file (part of extra data above). These would be useful for re-implementing the ePresence page using JavaScript and the video tag instead of Flash. If you have some code that does this, feel free to share it in the comments here. It is likely that such a solution could be easily adapted to work with other ePresence videos, including other FOSSLC videos.
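To make the previous item concrete, here is a minimal sketch of how the video tag plus the timestamp data could stand in for the Flash player. Everything here is hypothetical: the file name, slide image paths, element IDs, and times are placeholders, and the real values would be parsed out of the slide timestamps XML file above.

<video id="talk" src="fosslc_talk_320x240.ogv" controls></video>
<img id="slide" src="slides/slide01.png" alt="Current slide">
<script type="text/javascript">
// Placeholder slide change times, in seconds; the real values would
// come from the ePresence slide timestamps XML.
var slides = [
  { time: 0,   image: "slides/slide01.png" },
  { time: 95,  image: "slides/slide02.png" },
  { time: 210, image: "slides/slide03.png" }
];
var video = document.getElementById("talk");
video.addEventListener("timeupdate", function () {
  // show the most recent slide whose start time has passed
  for (var i = slides.length - 1; i >= 0; i--) {
    if (video.currentTime >= slides[i].time) {
      document.getElementById("slide").src = slides[i].image;
      break;
    }
  }
}, false);
</script>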
Please feel free to re-encode or transform the above videos in whichever ways you wish. Besides syncing the videos with slides in a standards-based way, you may want to trim the videos to the start and end of the talk or re-encode them to a different resolution.
If you would like the source files for the HD version, which are in MPEG-2, please let me know. They are quite large (about 8 GiB in total) so I haven’t posted them here.
Slide errata:
- The “Why do we care?” slide should have mentioned the Gdium Liberty, a netbook that uses a MIPS processor, as an example of a product made by a small company that does not have the resources to license Flash or codecs. Reducing people’s reliance on Flash and royalty-requiring codecs will allow many more products like this to enter the market. As it is, there are very few small companies making innovative new computers.
- The “What can we do about patented codecs?” and “What can we do about TPMs?” slides should have mentioned alternative music stores like Jamendo, which hosts music freely-licensed by the authors and offers it for download without DRM and in Ogg Vorbis format.
- The “What can we do about proprietary formats?” slide should have mentioned Free Youtube! and Free Slideshare!, which allow you to view YouTube and SlideShare without using a Flash player.
I have created Ogg Theora/Vorbis and XviD/MP3 versions of the excellent documentary RiP: A Remix Manifesto. You can find them at the following locations (Update – 2009-05-10: You can pay what you want for zipped versions of these, which are about 1% smaller, at http://www.ripremix.com/getdownloads/):
- RiP_A_Remix_Manifesto_853x480_Theora_Vorbis.ogv (898 MiB)
- RiP_A_Remix_Manifesto_640x360_XviD.avi (700 MiB)
I encourage you to support the creator by paying what you can at one of these pages:
- http://www.ripremix.com/getdownloads/ – pay what you can to download the original movie files, including a DVD image (this link is only available for people in the US)
- http://www.ripremix.com/donate/ – an easy way for anyone (including people outside the US) to pay what they can for the film
I recommend the Theora/Vorbis version because it is higher-quality (853×480 pixels) and because Theora and Vorbis are royalty-free codecs (see The codec dilemma for why this is important). I also provided an XviD/MP3 version since many DVD players support this format.
If you want to remix the documentary, check out Open Source Cinema, where you can upload your own modifications to it. The videos there and the downloads listed above are licensed under the Creative Commons Attribution-Noncommercial 3.0 Unported license.
Since version 8.04, Ubuntu has used PulseAudio as its default sound system. After hearing of various problems people have had with PulseAudio (like this one and this one), one may wonder why Ubuntu uses PulseAudio at all, especially since these problems can often be fixed by turning off PulseAudio with no ill effects. The rationale for the switch to PulseAudio in Ubuntu is laid out on this page:
https://wiki.ubuntu.com/DesktopTeam/Specs/CleanupAudioJumble
I’m providing this link in the hopes that others don’t have to search as far for the answer as I did. Here are some highlights from that document, describing the benefits of PulseAudio:
Beyond the obvious sound mixing functionality it offers advanced audio features like “desktop bling”, hot-plug support, transparent network audio, hot moving of playback streams between audio devices, separate volume adjustments for all playback or record streams, very low latency, very precise latency estimation (even over the network), a modern zero-copy memory management, a wide range of extension modules, availability for many operating systems, and compatibility with 90% of all currently available audio applications for Linux in one way or another.
The document also has an extensive list of Use cases, which demonstrate where PulseAudio can be useful. While PulseAudio has many interesting features, the majority of them are not exposed through a discoverable user interface, so people don’t know they exist and thus don’t miss them if they disable PulseAudio.
For those who are interested, I found the above wiki page by searching for the Hardy Heron release notes, which led me to this page; that page linked to this blueprint, whose full specification is the wiki page.
This is a response to John Dowdell’s Put down the Flavorade and slowly back away…. post, which is itself a reply to Tristan Nitot’s Making video a first class citizen of the Web. Here it is, starting with a quote from John Dowdell’s post:
Continue reading ‘Solving the codec problem’
On April 19, the Release Candidate for Ubuntu 9.04 on ARM was announced. This will most likely be used as a platform for new ARM netbooks, as Canonical previously hinted at. The announcement made several references to a Babbage development board, which piqued my interest. Wanting to learn more about the board, I searched for “babbage i.mx51” but found only a couple pages of results (including BabbageJauntyRCInstall), mostly relating to the Ubuntu 9.04 announcement. Eventually I tried an image search for “i.mx51”, which turned up this image, which I suspect is the Babbage development board:
Here are some more details about the board with references:
Continue reading ‘i.MX51 Babbage development board details’
I’ve tried a few different log file analyzers for getting stats about my web site, but they’ve all been confusing or lacking some features. FireStats is nice, but it only handles part of my site (the blog part) and I have no idea what time period it’s using in its page counts. The Webalizer is also useful, but it doesn’t understand that “/?p=266” is a different URL from “/?p=80” and its filtering options are non-obvious.
So I decided to write my own, filtlog, which you can find at:
http://github.com/ossguy/filtlog
It is very simple and easy to use. Just pass your web server log file or a set of log files to it and it will print a summary of how many hits each page got and which referrers were the most popular for each page. To make your results more accurate, you can add user agent strings of known bots to the config file, which causes those bots to be ignored in the results. You can also make the results more concise by specifying a maximum number of pages you want results for.
Because the code is very simple (50 lines of Ruby), filtlog is an excellent starting point for building more complex analysis tools. It is easy to change filtlog to organize the stats differently or provide other features like user agent tallying.
If you have any questions about filtlog, including how to use it or how the code works, post a comment on this article.
Accepting lines of input that are arbitrarily long is not something the C standard library was designed for. However, it can be an immensely useful feature to have. I recently came across this problem while rewriting the file input parts of libbitconvert. Here’s my solution, modeled after the C standard library’s fgets:
#include <stdio.h>   /* FILE, fgets */
#include <stdlib.h>  /* realloc */
#include <string.h>  /* strlen */

/* BCINT_EOF_FOUND and BCERR_OUT_OF_MEMORY come from libbitconvert's headers */

int dynamic_fgets(char** buf, int* size, FILE* file)
{
    char* offset;
    int old_size;

    if (!fgets(*buf, *size, file)) {
        return BCINT_EOF_FOUND;
    }

    /* if the buffer already ends in a newline, we read the whole line */
    if ((*buf)[strlen(*buf) - 1] == '\n') {
        return 0;
    }

    do {
        /* we haven't read the whole line so grow the buffer */
        old_size = *size;
        *size *= 2;
        *buf = realloc(*buf, *size);
        if (NULL == *buf) {
            return BCERR_OUT_OF_MEMORY;
        }
        /* continue reading at the old terminating NUL */
        offset = &((*buf)[old_size - 1]);
    } while ( fgets(offset, old_size + 1, file)
              && offset[strlen(offset) - 1] != '\n' );

    return 0;
}
And here is an example of how to use it:
char* input;
int input_size = 2;
int rc;

input = malloc(input_size);
if (NULL == input) {
    return BCERR_OUT_OF_MEMORY;
}

rc = dynamic_fgets(&input, &input_size, stdin);
if (BCERR_OUT_OF_MEMORY == rc) {
    return rc;
}

/* use input */

free(input);
To show you how dynamic_fgets works, I’ll break it down line by line and then describe some of its features:
Continue reading ‘dynamic_fgets: Reading long input lines in C’