Blogger for Podcasts

Developer Allen Pike thinks the market for a professional podcast app is too small to be viable. It is. Creating tools for existing podcasters is not the interesting part of the market. To make a podcasting app viable, the target is not the current universe of podcasters, but growing the size of that universe. That will require a tool that makes it easy for more people to create and publish more podcasts. A tool that grows the podcast universe will have to simplify the multi-ender recording method and the entire process of recording and distributing podcasts. A viable podcasting tool needs to make it simple for anyone to plug in a microphone, record a good-sounding podcast, and distribute it, all without any technical knowledge beyond plugging in a USB device and pointing a mouth at a microphone. Oh, and it probably also needs to be free.

In other words, this is not just audio recording software, but a simple publishing tool, like Blogger for podcasts. That idea sounds familiar. In fact, it’s what Blogger founder Evan Williams hoped to create at Odeo, which launched in 2005. Evhead: How Odeo Happened

I’m super-excited to see where this goes. Podcasting is going to be freakin’ huge. I don’t have time in this post, because it’s 2am and I gotta be on stage at 8am, to give my pitch for why. But it’s the same story as blogging (with several unique characteristics of its own), but in a whole new medium that is much bigger than people think. And it’ll happen much, much faster.

(Whatever happened to Odeo? In 2006, they created and launched a messaging service called Twttr, which ended up becoming slightly more popular than podcasting, especially after it voweled-up into a little thing called Twitter. Not surprisingly, Odeo reorganized into Obvious Corp. and sold off the Odeo assets.)

Now, as podcasting enters its next phase of growth, with smartphones, dedicated podcast listening apps, connected cars, and decent mobile data making it easier and easier to subscribe and listen to podcasts, no tool has made it correspondingly easier to make podcasts. Apple’s GarageBand now has better tools for making music, but hasn’t drastically improved its tools for making podcasts. FaceTime, GarageBand, iCloud, and iTunes could work together to create a very simple podcast production flow for Mac users. Apple could even enable simple subscriptions to podcasts (and drive revenue to Apple and producers), but that revenue would be tiny compared to iTunes and App Store revenue, and even tinier compared to Apple’s core business.

While Soundcloud, Libsyn, and Squarespace have made it easy and cost-effective to host podcasts, there isn’t anything on the market that is as simple and effective for podcasts as Blogger was for web publishing.

Part of this is technical. Recording audio is much more technical than publishing text. But a tool that makes it simple to plug in a $50 USB microphone and get good results without needing a producer, professional audio software, or a way to mix down double-ended recordings would drastically expand the number of podcasters, just like Blogger, Movable Type, and WordPress drastically expanded the number of web publishers.

After listening to this week’s ATP (“A Spirited Defense of Pong”) and sketching out some notes, I am less pessimistic than Marco, John, and Casey about whether there is an opportunity. But I think the challenge is even more difficult than they discussed, because it’s not just software, it’s an entire platform.

What does this platform need to do?

1. Multi-end recording

This is the area that Pike discussed in his post. The application needs to create a reliable VoIP connection, and each side of the conversation needs to record locally. Since the same application is running the call and the recording, it should be able to help with the first difficult part of publishing a multi-ender podcast: syncing up the recordings. Ideally, if bandwidth allows, the software would silently upload the guests’ recordings to the cloud during the call, so the host has access to all of the audio.
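To make the syncing step concrete, here is a minimal sketch of one common alignment trick: cross-correlate a guest’s clean local track against the host’s recording of the VoIP call itself, which contains a degraded copy of that guest’s audio, then shift the clean track onto the call’s timeline. The filenames and the assumption of mono WAV files at a shared sample rate are mine, not anything Pike or the ATP hosts specified.

```python
# A minimal sketch, not a shipping implementation: align a guest's clean
# local recording against the host's recording of the VoIP call, which
# contains a (degraded) copy of the guest's audio. Filenames and the
# assumption of mono WAVs at one shared sample rate are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

def load_mono(path):
    rate, data = wavfile.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)              # fold multi-channel files to mono
    return rate, data.astype(np.float64)

def estimate_lag(call, clean, rate, window_seconds=60):
    """Samples into the call recording at which the clean track begins."""
    n = int(min(len(call), len(clean), window_seconds * rate))
    xcorr = correlate(call[:n], clean[:n], mode="full")
    return int(np.argmax(xcorr) - (n - 1))

def place_on_timeline(clean, lag):
    """Shift the clean local track onto the call recording's timeline."""
    if lag > 0:
        return np.concatenate([np.zeros(lag), clean])   # clean track starts later
    return clean[-lag:]                                  # clean track has extra lead-in

rate, call = load_mono("call_recording.wav")
_, guest = load_mono("guest_local.wav")
guest_aligned = place_on_timeline(guest, estimate_lag(call, guest, rate))
```

A real tool would also handle clock drift between sound cards, which is why the hard part is the product, not the math.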

Before even getting to the podcast hosting side of the equation, we’re already talking about major cloud integration, so there’s that.

Oh, and ideally there are at least three versions of the software: a web app, a Mac app and a Windows app. If possible, also iOS and Android.

2. Editing

The recording software needs to make it easy to edit out gaps and digressions. Ideally, it would automatically run something like Overcast’s Smart Speed to get rid of excess gaps. It would highlight in the editor places where people are talking over each other (where more than one track is over the noise floor) and allow the host to quickly mute, cut, or space out the crosstalk.
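As a rough illustration of what that detection involves (and emphatically not Overcast’s actual algorithm), both gap trimming and crosstalk highlighting reduce to comparing per-track frame energy against a noise floor. A sketch, assuming already-aligned, equal-length mono tracks as NumPy float arrays and purely illustrative thresholds:

```python
# Sketch only: find long silences (candidates for tightening) and crosstalk
# (more than one track above the noise floor at once). Assumes aligned,
# equal-length mono tracks scaled to [-1, 1]; thresholds are illustrative.
import numpy as np

def frame_rms(track, rate, frame_ms=50):
    """RMS energy of consecutive frames of frame_ms milliseconds each."""
    frame = int(rate * frame_ms / 1000)
    n = len(track) // frame
    return np.sqrt((track[:n * frame].reshape(n, frame) ** 2).mean(axis=1))

def active_frames(track, rate, noise_floor=0.01, frame_ms=50):
    """Boolean mask of frames where the track is above the noise floor."""
    return frame_rms(track, rate, frame_ms) > noise_floor

def long_gaps(tracks, rate, min_gap_s=1.0, frame_ms=50):
    """(start, end) frame ranges where no one is talking for >= min_gap_s."""
    silent = ~np.any([active_frames(t, rate, frame_ms=frame_ms) for t in tracks], axis=0)
    min_frames = int(min_gap_s * 1000 / frame_ms)
    gaps, start = [], None
    for i, s in enumerate(silent):
        if s and start is None:
            start = i
        elif not s and start is not None:
            if i - start >= min_frames:
                gaps.append((start, i))
            start = None
    if start is not None and len(silent) - start >= min_frames:
        gaps.append((start, len(silent)))
    return gaps

def crosstalk_frames(tracks, rate):
    """Frame indices where more than one track is above the noise floor."""
    active = np.array([active_frames(t, rate) for t in tracks])
    return np.where(active.sum(axis=0) > 1)[0]
```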

The software would have to provide access to a simple compressor, limiter, and EQ adjustments. Ideally, it would automatically apply reasonable settings with a simple “magic setting” tool. That’s obviously trivial to build.
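A real one-knob mastering tool is anything but trivial, but as a toy illustration of the idea, here is a sketch that normalizes a track toward a target RMS level and tames peaks with a soft limiter. The numbers are made up, and a real product would measure loudness properly and use a genuine compressor/limiter chain.

```python
# Toy "magic setting": bring a track toward a target RMS level, then tame
# peaks with a soft (tanh) limiter. Illustrative values, not tuned defaults.
import numpy as np

def magic_setting(track, target_rms=0.1, ceiling=0.9):
    rms = np.sqrt(np.mean(track ** 2))
    gained = track * (target_rms / max(rms, 1e-9))   # crude loudness normalization
    return ceiling * np.tanh(gained / ceiling)        # gentle peak limiting
```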

3. Structure

In most cases, the podcaster will likely want a regular theme-music or intro section. Some may want section introductions, or to assemble a podcast from a combination of field recordings, studio segments, and discussions. The software will need to provide the structure (in the same way that blogging tools provide templates) and the ability to combine different audio bits.

4. Stock media

Podcasters need access to high-quality, free (or inexpensive) royalty-free music and sound effects. There is an opportunity to provide a quality library as a value-add for users, as well as a revenue opportunity in licensing additional music and sound effects. So this requires a billing system and integration with a third-party stock media library API.

5. Hosting

So, hosting and serving media files: no big deal in the cloud era, right? (Aside from paying the storage and CDN bills.) In addition to hosting the podcast files, the service also needs a way to easily build simple, attractive websites and valid podcast RSS feeds. Again, this is not a cutting-edge problem, but it is a matter of non-trivially difficult execution. Oh, and throw in an API to allow third-party recording tools to tie in to the platform.
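The feed side really is well-trodden territory: a valid podcast feed is just RSS 2.0 plus Apple’s iTunes namespace extensions. Here is a minimal sketch using only Python’s standard library; the show metadata, episode data, and URLs are placeholders, and a real host would also emit artwork, categories, durations, and proper GUIDs.

```python
# Minimal podcast feed sketch: RSS 2.0 plus the iTunes namespace. All show
# and episode values below are placeholders, not a real feed.
import xml.etree.ElementTree as ET

ITUNES = "http://www.itunes.com/dtds/podcast-1.0.dtd"
ET.register_namespace("itunes", ITUNES)

def build_feed(show, episodes):
    rss = ET.Element("rss", {"version": "2.0"})
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = show["title"]
    ET.SubElement(channel, "link").text = show["link"]
    ET.SubElement(channel, "description").text = show["description"]
    ET.SubElement(channel, f"{{{ITUNES}}}author").text = show["author"]
    for ep in episodes:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = ep["title"]
        ET.SubElement(item, "pubDate").text = ep["pubdate"]
        ET.SubElement(item, "guid").text = ep["url"]
        ET.SubElement(item, "enclosure", {"url": ep["url"],
                                          "length": str(ep["bytes"]),
                                          "type": "audio/mpeg"})
    return ET.tostring(rss, encoding="unicode")

feed_xml = build_feed(
    {"title": "Example Show", "link": "https://example.com",
     "description": "A hypothetical podcast.", "author": "A. Podcaster"},
    [{"title": "Episode 1", "pubdate": "Mon, 02 Jun 2014 09:00:00 GMT",
      "url": "https://example.com/ep1.mp3", "bytes": 12345678}],
)
```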

6. Advertising

A small podcast doesn’t have enough listeners to make it worthwhile for advertisers. But with thousands of small podcasts running advertisements, the network becomes a viable platform for advertising. Advertising makes it possible to offer this as a free service. Of course, there’s significant capital involved in marketing and advertising the free service enough for it to become a platform for advertisers. Part of the templating structure is ad units, which the software would drop in every few minutes. And the template system would have to be flexible enough to allow individual podcasters to control where the ad breaks fall and to record their own bumpers.

In order to keep successful podcasts from fleeing the platform, there needs to be revenue sharing with podcasts that generate traffic. So we also need to build the analytics to do the tracking and the payment system to pay out the podcasters. Revenue-share podcasters should also have the opportunity to record their own native ads.

And if the product is good enough to entice podcasters who are already professionals, there’s the minor problem of figuring out how much to charge for an ad-free version of the service that is profitable but not ridiculously expensive.

7. Products

This is not a necessary part of the service, but it is a revenue opportunity. Since audio recording depends on quality equipment, the other business opportunity is selling kits of microphones, interfaces, and accessories. Again, not a difficult problem to solve, but a difficult thing to do well, since you’re adding in wholesale relationships, packaging, retailing, and fulfillment.

So, is this a difficult product to build?

Definitely. This is just a high-level overview of the problems that would need to be solved in order to create the product. Already, much of the software required to build this exists as part of OS X features available to developers and in open source projects. AWS, Azure, or Google Cloud make it easy to start a web-scale project without buying racks of servers and colocating them. But integrating all of the different pieces, marketing the product, and scaling enough to start selling advertising is certainly not trivial.

Is it a business?

Maybe? Distributing media by podcast is going to be a market at least an order of magnitude smaller than the web overall. Audio is a linear medium that takes a set amount of time. It’s deliberately long-form in a world that is becoming increasingly short-form. The professionals already have workflows and distribution. The only way to tap into that market is to add something that’s better or cheaper than their existing workflow. And “cheaper” likely means burning capital to pay someone else’s expenses and having underwear gnomes figure out how to fill the gap.

Are there enough creators who want to make podcasts but don’t because it’s too hard? How many of them will stick with it? How many of them will develop an audience? And how do you keep a popular podcast on the platform once it can sell its own sponsorships?

Was this a 2005 product that’s missed its opportunity? In a world where YouTube dominates the short form video space and enables creators to make real money, can an audio-only service be relevant outside of the artisanal podcast movement? Or do you add to the costs by supporting video, too?

Concepts, shipping, and secrecy

At Vox, Matt Yglesias posits that Apple is losing the innovation race to Google: Google wants to reinvent transportation, Apple wants to sell you fancy headphones

There were two striking pieces of business news this week from America’s leading technology brands. On the one hand, Google unveiled a prototype of an autonomous car that, if it can be made to work at scale, promises to end mass automobile ownership while drastically reducing car wreck fatalities and auto-related pollution. Meanwhile, Apple bought a company that makes high-end headphones.…

But that’s exactly why it’s so disappointing to see Apple focused overwhelmingly on small-ball extensions of its existing franchise while Google goes for big plays.

Yglesias posits that one of the reasons Google can make these big plays while Apple plays small ball is that Brin and Page’s complete control of Google (and their resulting lack of accountability to shareholders) gives them the freedom to experiment with big ideas. Apple, by contrast, is beholden to activist shareholders who want the company to release its huge cash reserves to shareholders.

However, there is no way to actually know whether Apple is in fact working on big ideas or just making iPhones in new colors. Apple doesn’t announce new product concepts or share its work in development. Apple creates products, Apple announces products, and Apple ships products. Since Jobs returned to Apple after the NeXT acquisition, the company has focused on creating and shipping products. See, e.g., John Gruber in 2011, The Type of Companies That Publish Future Concept Videos, and Kontra in 2008, Why Apple doesn’t do “Concept Products”. I’m sure that Apple is working on all kinds of product variations and new product ideas in house. But without signing Apple’s restrictive NDA, we’re not going to know about those new ideas.

Apple has a culture of obsessive secrecy, and Apple employees do not leak information. Two years ago, at the D10 conference, Tim Cook announced Apple’s plans to “double down on secrecy.” By all indications, this has been successful.

Tomorrow, Tim Cook and Apple’s senior executives will step on stage at WWDC, Apple’s annual developer conference, to announce OS X 10.10 and iOS 8, and likely to introduce Jimmy Iovine and the Beats team. But aside from a “flatter design” or a Healthbook app, we have no leaks about what Apple plans to introduce. That could be either because Apple is not coming up with any major innovations, or because Apple doesn’t leak them. The Apple rumor community hasn’t seen screenshots from either operating system. Apple watching is like Kremlinology: Apple doesn’t announce its plans, so analysts have to infer those plans from third-party sources of information, mostly the supply chain and potential partners. Under Jobs, Apple was not afraid to be vindictive if partners leaked details about Apple products before Apple announced them.

The majority of leaks about Apple hardware come from sources within its manufacturing partners in Asia, whose employees and contractors are not as strongly incentivized to protect Apple’s proprietary and confidential information as Apple’s own employees. (This is, unfortunately, why I am not optimistic about a new Retina Thunderbolt Display or Retina iMac release tomorrow. I really do want a full-sized Retina monitor, but more importantly, a 12” or 13” laptop that can drive a 4K panel.)

Of the rumors in the MacRumors roundup of what is likely to be announced at WWDC tomorrow, all involve the types of applications and APIs that rely on integration with third-party hardware and/or software: Healthbook (integrating with fitness tracking), song identification (partnering with Shazam), mobile payments (partnering with retailers), and smart home integration (partnering with hardware and software). Where rumors seem unlikely (major new hardware announcements), it’s because of the lack of smoke from the hardware supply chain. Apple’s own innovations do not leak.

This doesn’t preclude Apple announcing a wholly new product type that is not yet ramped for production. But if Apple only announces shipping products, why would it announce something that it’s not ready to ship? Because regulatory approval.

The single biggest product announcement that Steve Jobs made was the 2007 introduction of the original iPhone at Macworld. (Back when Apple presented a keynote at Macworld.) Part of what made the keynote so surprising was the audacity of the product. Apple watchers had been expecting an iPhone for a number of years, anticipating some kind of iPod and mobile phone hybrid, maybe with a click wheel or hardware keyboard. Most people were floored by the device, which Jobs announced as three things: a “widescreen iPod with touch controls,” a “revolutionary mobile phone,” and a “breakthrough internet communications device,” which, by the way, were not three different devices, but just one.

Even though Apple announced the iPhone in January 2007, the first iPhones didn’t ship until June 29. Apple’s hand was forced because the FCC requires manufacturers of wireless devices to obtain regulatory approval for devices that will transmit over the public airwaves. Had Apple submitted the iPhone for approval before announcing it, rumor sites and the tech press would have uncovered all of the product details before Apple itself could announce them. If nothing else, Apple wants to control its message. New categories that require regulatory approval won’t be ramped for production, and so we won’t see leaks.

But regulatory approval is also the reason that we are hearing so much about Google’s self-driving cars and Amazon’s drones. These are not only products that would require regulatory approval, but that would require significant changes to rules or legislation in order to be legal to use or sell. Any commercial aircraft, including autonomous aircraft, requires FAA approval. NHTSA is evaluating guidance and regulations on self-driving cars. In addition, each state will have different regulations governing the use of roads and driving standards, multiplying the lobbying burden for obtaining regulatory approval. 

Between attempting to catalogue all the world’s knowledge, creating self-driving cars, and acquiring Boston Dynamics, the creator of various military robots, do we as a society need to worry that Google is building Skynet?


Broadband Universal Service

This past weekend, I spent time with family in the beautiful Catskill mountains. On the rainy day, we all quickly became frustrated with the speed of our pokey DSL internet connection.

Speed testing revealed actual speeds of 1.5 Mbps downloads and 0.4 Mbps uploads.1

In the FCC’s sixth broadband deployment report from 2010, the Commission redefined broadband as a minimum of 4 Mbps down and 1 Mbps up. A report today indicates that the Commission is considering upping the threshold of broadband to 10 Mbps down or higher.

Verizon indicated that, being so far from the central office, we are lucky to have DSL at all, and that it cannot offer more speed due to the noise over the length of the copper run.

While cable service is available in the denser Village of Hunter, the local Time Warner franchise monopoly will not run cable to the more spread-out houses in the Town of Hunter.

So, what options are available?

AT&T has 4G LTE service from a tower on Hunter Mountain. This provides solid speeds, at over 20 Mbps. However, it only offers that speed for a fraction of the month. All of AT&T’s LTE plans have bandwidth caps: 10 GB of data per month at 20 Mb/second is approximately 4,000 seconds of peak bandwidth, or just over one hour per month.2

Satellite internet access from HughesNet offers 10 Mbps downloads and 1 Mbps uploads for $60 per month. But it is also capped, at 20 GB per month. At half the speed and twice the cap, HughesNet offers peak bandwidth for nearly four and a half hours per month.3

Of course, even streaming video and downloading files are unlikely to use the full bandwidth available in a connection, so the lowest-tier data cap is useful for more than that. But metered broadband limits the use of the internet, just like metered dial-up AOL access did in the mid-’90s.

If streaming a movie carries a marginal bandwidth cost, you’re less likely to stream it. In the context of entertainment, that’s no big deal. But what about the student trying to access online research and learning resources? Doesn’t metered access disadvantage students who cannot get unmetered broadband, since the more they use internet resources, the more it costs?

I’m sure there are research studies from the mid-’90s showing how the transition from hourly dial-up AOL to unlimited dial-up internet access made the early commercial internet thrive.

In the twentieth century, the federal government undertook the effort to connect every house in America to the electrical grid and the telephone network. The local phone company is required to provide every house with a dial tone. The local power company is required to provide every house with a connection to the power grid. If we don’t create universal, uncapped broadband service, we will quickly strand rural America back in the twentieth century.

For now, unlimited service throttled to 10 Mbps on a reliable LTE connection is far more useful and productive than 30 Mbps LTE capped at 10 GB of traffic per month. Will the wireless providers have enough spectrum and backhaul to provide that? What about the next-generation internet? When will wireless be able to deliver 100 Mbps or 1 Gbps to the home? This is why we need a national initiative — subsidized by the federal government — to bring common carrier fiber to every home in America. Allow broadband providers to compete for customers on the network, but require that every home has access to a 100 Mbps connection within the next 5 years. It may not be universal health care or a universal end to hunger, but it is what America needs to do to stay competitive and connected.

1 I forgot to screen capture the speed test results. (This post would have been more impressive with screen caps!)

2 Some rough back-of-the-envelope math: 10 GB/month = 80 Gb/month = approximately 80,000 Mb/month. At 20 Mb/second, that is approximately 4,000 seconds of peak bandwidth, or 66.67 minutes.

3 Some rough back-of-the-envelope math: 20 GB/month = 160 Gb/month = approximately 160,000 Mb/month. At 10 Mb/second, that is approximately 16,000 seconds of peak bandwidth, or 266.67 minutes (nearly four and a half hours).
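For anyone who wants to redo this back-of-the-envelope math for another plan, the calculation is just the cap divided by the throughput (using decimal GB and Gb, as above):

```python
# Hours of full-speed use a monthly data cap allows (decimal GB -> Gb -> Mb).
def peak_hours(cap_gb, speed_mbps):
    megabits = cap_gb * 8 * 1000           # e.g. 10 GB -> 80,000 Mb
    return megabits / speed_mbps / 3600    # seconds at full speed -> hours

print(peak_hours(10, 20))   # AT&T LTE example: ~1.1 hours
print(peak_hours(20, 10))   # HughesNet example: ~4.4 hours
```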


Google Book Search is a Fair Use

Back in 2005, I wrote that Google Print “may single-handedly keep the copyright-related blog world in business for the next few years.” Eight years later, the Southern District of New York decisively granted Google’s motion for summary judgment that the book scanning project is fair use. The Authors Guild v. Google (SDNY, Nov. 14, 2013)
The book search does not provide a competitive substitute for the actual book:

“An ‘attacker’ who tries to obtain an entire book by using a physical copy of the book to string together words appearing in successive passages would be able to obtain at best a patchwork of snippets that would be missing at least one snippet from every page and 10% of all pages.”

1. The Purpose and Character of the Use
Google’s use of the scanned books’ text to create a search index and display search result snippets is “highly transformative. Google Books digitizes books and transforms expressive text into a comprehensive word index that helps readers, scholars, researchers, and others find books.”
While books are used to convey information, Google uses the text differently:

“Google Books thus uses words for a different purpose — it uses snippets of text to act as pointers directing users to a broad selection of books.
Similarly, Google Books is also transformative in the sense that it has transformed book text into data for purposes of substantive research, including data mining and text mining in new areas, thereby opening up new fields of research. Words in books are being used in a way they have not been used before. Google Books has created something new in the use of book text — the frequency of words and trends in their usage provide substantive information.
Google Books does not supersede or supplant books because it is not a tool to be used to read books. Instead, it “adds value to the original” and allows for “the creation of new information, new aesthetics, new insights and understandings.” Leval, Toward a Fair Use Standard, 103 Harv. L. Rev. at 1111. Hence, the use is transformative.

Even though Google is a commercial enterprise, it isn’t using the book scans in a commercial manner: “Here, Google does not sell the scans it has made of books for Google Books; it does not sell the snippets that it displays; and it does not run ads on the About the Book pages that contain snippets. It does not engage in the direct commercialization of copyrighted works.”
Thus, the first factor “strongly favors” a finding of fair use.
Would the outcome here be different if Google ran ads against book content and searches? If it sold books through its own book store?
2. The Nature of Copyrighted Works
Books are the paradigmatic protectable copyrighted works — after all, copyright wouldn’t exist but for books. But works of fiction are entitled to greater protection than non-fiction books. Most of the books scanned by Google are non-fiction. Further, the scanned books are published and available to the public, which favors a finding of fair use.
3. Amount and Substantiality of the Portion Used
Google does scan the entirety of the works. However, full-text copying is required in order to be able to index and search the books. “Significantly, Google limits the amount of text it displays in response to a search.” Because Google scans the entire works, the third factor weighs slightly against a finding of fair use.
4. Effect of Use Upon Potential Market or Value
Google’s book search does not replace or compete with actual books.

“Google does not sell its scans, and the scans do not replace the books. While partner libraries have the ability to download a scan of a book from their collections, they owned the books already — they provided the original book to Google to scan. Nor is it likely that someone would take the time and energy to input countless searches to try and get enough snippets to comprise an entire book. Not only is that not possible as certain pages and snippets are blacklisted, the individual would have to have a copy of the book in his possession already to be able to piece the different snippets together in coherent fashion.
To the contrary, a reasonable factfinder could only find that Google Books enhances the sales of books to the benefit of copyright holders. An important factor in the success of an individual title is whether it is discovered — whether potential readers learn of its existence. Google Books provides a way for authors’ works to become noticed, much like traditional in-store book displays. Indeed, both librarians and their patrons use Google Books to identify books to purchase.”

The fourth factor weighs strongly in favor of a finding of fair use.
Finally, Judge Chin rules, “Google Books provides significant public benefits. It advances the progress of the arts and sciences, while maintaining respectful consideration for the rights of authors and other creative individuals, and without adversely impacting the rights of copyright holders.”
This is a decisive ruling that scanning book content for indexing, searching, and educational purposes is fair use.
Discussion and Commentary
Evan Brown, Information Law Group, What the Google Book Search Fair Use Decision Means For Innovators: “Google’s use of technology in this situation was disruptive. It challenged the expectation of copyright holders, who used copyright law to challenge that disruption. It bears noting that in the court’s analysis, it assumed that copyright infringement had taken place. But since fair use is an affirmative defense, it considered whether Google had carried its burden of showing that the circumstances warranted a finding that the use was fair. In this sense, fair use serves as a backstop against copyright ownership extremism. Under these particular circumstances — where Google demonstrated incredible innovation — that backstop provided room for the innovation to take root and grow. Technological innovators should be encouraged.”
Matthew Sag, Google Books held to be fair use: “Unless today’s decision is overruled by the Second Circuit or the Supreme Court — something I personally think is very unlikely — it is now absolutely clear that technical acts of reproduction that facilitate purely non-expressive uses of copyrighted works such as books, manuscripts and webpages do not infringe United States copyright law. This means that copy-reliant technologies including plagiarism detection software, caching, search engines and data mining more generally now stand on solid legal ground in the United States. Copyright law in the majority of other nations does not provide the same kind of flexibility for new technology.”
Ali Sternburg, DisCo Project, Google Books Opinion is a Win for Fair Use and Permissionless Innovation: “One key takeaway from this case is validating that companies can invest resources into creating tools that benefit the public without seeking permission from gatekeepers, if their efforts are transformative, which can involve copying and digitizing entire works.”
Joe Mullin, Ars Technica, Google Books ruled legal in massive win for fair use: “In the long term, the failure to settle may result in more scanning, not less. If Chin’s ruling stands on appeal, a clean fair-use ruling will make it easier for competitors to start businesses or projects based on scanning books—including companies that don’t have the resources, legal or otherwise, that Google has.”
Timothy B. Lee, The Washington Post, Google Books ruling is a huge victory for online innovation: “If the ruling is upheld on appeal, it will represent a significant triumph for Google. More important, it would expand fair use rights, benefiting many other technology companies. Many innovative media technologies involve aggregating or indexing copyrighted content. Today’s ruling is the clearest statement yet that such projects fall on the right side of the fair use line.”
Adam L. Penenberg, The Google Books decision is good for authors and readers: “Although the two litigants were the Authors Guild and Google, and the guild vows to appeal the decision, it doesn’t represent my views. I’m glad it lost. I don’t agree that Google robs authors of income, because the vast majority of us don’t make a cent off our books in the years after they are published. If Google is willing to take on the task of scanning each book and making them searchable, then setting up a way for people to be able to buy them right there and then, it should also get a cut of the action.”
Will Oremus, Slate, Google Books Ruling a Win for Fair Use … and Rich Tech Companies: “The trick, it seems, is to steal so aggressively and profit so much that by the time the lawsuits hit, you’re rich enough to fend them off.”
David Kravets, Wired, Google’s Book-Scanning Is Fair Use, Judge Rules in Landmark Copyright Case: “Google’s massive book-scanning project that makes complete copies of books without an author’s permission is perfectly legal under U.S. copyright law, a federal judge ruled today, deciding an 8-year-old legal battle.”

Disrupt my TV, please

At Time’s Techland Blog, Ben Bajarin writes:

Why We Want TV to Be Disrupted So Badly.

I was at the Consumer Electronics Show where [Tivo and ReplayTV] debuted, and their booths were as packed as any on the show floor. Both offered such a simple premise: pause, rewind and fast forward live TV. In my opinion, these two companies paved the way for the disruption we will eventually see. Why? Because they showed us how much better our TV experience could be, and how crappy the technology was that our current television providers provided us with.
I remember having discussions with executives at both TiVo and ReplayTV during their startup years. In particular, I remember a conversation with Anthony Wood, one of the founders of ReplayTV and the now founder and CEO of Roku. I asked Anthony why the current TV providers didn’t think of this first. His answer, plain and simple, was “because they are not technology companies.” So profoundly true. And the fact that they are not technology companies is the simple reason so many of us in the tech industry want TV to be disrupted. We know the technology and the experience can be so much better.

No. The reason that the existing TV companies weren’t thinking about innovating the TV experience is not that they are not technology companies (they are), but simply that they don’t have to. The market to deliver television and broadband is not competitive. The major cable providers don’t compete with each other in the same market. Whether any particular household subscribes to television service through Comcast or Time Warner or Cablevision depends not on that household’s choice of one cable provider over another, but on the local monopoly franchise granted to a cable provider.
Cable companies are not competing with each other to win market share at the consumer level, but are competing with each other to win market share at the municipal level. They compete for the franchise right. So there’s no need to push forward with technology to make the viewing experience better — only to be generally competitive with other cable providers in other markets so as to prevent an overwhelming groundswell of desire to change.
If the cable companies competed directly for the same customers, the product and the experience would be far more customer-friendly.
In most regions, consumers have few other options for internet or television service than their local cable monopoly. DSL internet service from the phone company is no longer competitive with the speeds that cable modems can offer. Satellite television service requires installing a satellite dish and service can be disrupted by bad weather.
In New York City, Verizon is supposed to provide competitive broadband/video fiber optic service to all households by June 30, 2014, but many areas of the city still lack access to the competitive fiber optic network. NYC Public Advocate (and mayoral candidate) Bill de Blasio notes that Verizon is not yet serving many areas of New York with FiOS. Outside of the FiOS service area, Google is wiring cities with fiber optics and an impressively ambitious internet and TV service, but its rollout is limited to Kansas City (and is coming next to Provo, UT, and Austin, TX). Otherwise, no cable company has to deal with a truly competitive service provider. Arms-length competition, where providers simply need relative parity with each other, doesn’t force providers to innovate in the same way that direct competition would.
And since Tivo and ReplayTV launched more than a decade ago, the DVR market has become less innovative and competitive. In the more than seven years since Tivo introduced its first HD device (the Series 3), the Tivo software interface still has not been fully updated for HD — a substantial amount of the user interface in the latest Premiere DVRs has been carried over directly from the decade-old Series 2 design. In fact, for sharing recorded content around the house, many cable company solutions are better than Tivo.
ReplayTV was forced out of business through litigation over its automatic commercial-skipping feature. Cable providers are competing successfully with Tivo not by offering a DVR that is functionally competitive with Tivo’s, but by offering DVR service that works well enough for most viewers, is easier to install, and comes as a single fee on the cable bill.
If cable providers had to compete with each other for customers, the quality of the television viewing experience would be orders of magnitude better than it is today. But fortunately, we are on the cusp of a period of rapid, transformative innovation in the television space.
Innovation is coming not because the cable television market is becoming any more competitive, but despite the best efforts of the cable companies to prevent consumer-friendly change.
Most broadband connections (largely through cable companies) are fast enough to stream HD-quality video reliably. Devices to stream internet content to an actual television are inexpensive and work reasonably well. Netflix, Amazon, iTunes, Hulu, HBO GO, ESPN, MLB, the NBA, and the NHL all stream high-quality content to Roku and/or Apple TV, making it possible to replace cable television with on-demand access to a vast library of quality content and/or live sports. And although some cable providers do not authenticate their users for HBO GO access on Roku or Apple TV devices, the increasing quality and availability of streaming content is forcing cable companies to compete not just with one competitive cable box provider, but with the wealth of video programming on the entire internet. And so, to stay competitive and keep customers spending on video programming, rather than treating the cable company as just a broadband provider, the cable companies have to offer the ability to time-shift or place-shift content, whether by streaming video to tablets, access to on-demand programming, or network-based DVRs.
The oft-maligned bundling of cable channels actually provides more value, at least in terms of the breadth of programming available, compared with à la carte internet video.
So, the problem isn’t that cable providers aren’t technology companies — that assertion is preposterous considering that cable providers are also the primary providers of home broadband in the US. The reason the television industry is ripe for disruption is that the consumer market is non-competitive.

Transparency May Be Required

Apple’s Developer Site was hacked. All Things D reports: Apple Developer Center Was Hacked; Site Remains Down While Company Overhauls Security
In its notification, Apple notes that it is letting developers know about this attack “in the spirit of transparency.”
Without knowing more about what information was obtained in the breach, there are a number of scenarios in which state laws would require Apple to notify its users that their personal information may have been accessed by an unauthorized third party.
In the US, each of the fifty states (as well as DC and Puerto Rico) has its own data breach notification law. Compliance is based not on the state in which the entity that stores personal information resides or stores that information but, because we consider privacy to be a personal right, on the home state of the person whose data is being stored.
Most states define personal information to include:

An individual’s first name or first initial and last name plus one or more of the following data elements: (i) Social Security number, (ii) driver’s license number or state-issued ID card number, (iii) account number, credit card number or debit card number combined with any security code, access code, PIN or password needed to access an account; and generally applies to computerized data that includes personal information. Personal Information shall not include publicly available information that is lawfully made available to the general public from federal, state or local government records, or widely distributed media.

But some states have a broader definition of personal information than this. Some states require that the state be notified in the case of a data breach that affects a certain number of residents. Some states offer a safe harbor from notification if personal information is encrypted and was not accessed in an unencrypted format.
BakerHostetler has straightforward and comprehensive summaries of data breach notification laws: Data Breach Charts. With each of the states having a different requirement, Apple’s notice to its developers wasn’t solely in the spirit of transparency, but also in the spirit of legal compliance.
A security researcher claims to have accessed secure Apple data after filing a bug report to encourage Apple to fix the hole that he found. iMore reports: Security researcher claims to have reported bugs shortly before Apple took down its developer portal. Whether the data was accessed by a white hat hacker or a black hat hacker doesn’t change the fact that personal data was delivered to a third party, which requires the company storing the personal data to report it to the affected individuals and, depending on the number of people affected, also to certain states.
Last week, the House Energy & Commerce Committee Subcommittee on Commerce, Manufacturing, and Trade held hearings on whether a federal data breach notification statute is necessary. Subcommittee Explores State of Data Breaches in United States
Earlier this month, the California Attorney General released her report on data breaches affecting California residents in 2012, when 2.5 million Californians had personal information put at risk through an electronic data breach. More than half of those citizens would have been protected if the companies storing their personal data had encrypted it.

API Madness

This week, the interwebs went all aflutter when Michael Sippey of Twitter announced the Changes coming in Version 1.1 of the Twitter API.
In general, Twitter is seeking to more tightly control the user experience and discourage active development of third-party client applications. Yet so much of Twitter’s success seems to come from its origins in a lack of control. It was simple, and the users built most of the conventions that Twitter relies on.
For a service like Twitter that is so simple and basic, will attempting to make it into something different end up killing it off? Will App.net or something else be the Facebook to Twitter’s Friendster or Myspace?
Even though much of the use of Twitter happens on its own website, the most active users, and the reason the service became successful, seem to come from client software, all of which originally came from third parties. Twitter’s official clients were originally written independently by Loren Brichter as Tweetie and then acquired (and then apparently left for dead).
As Twitter is trying to build itself into a business, it’s also changing to dictate how the service is used, rather than building on the conventions that have evolved.
Web communities tend to take on their own unique character and personality. Some, like Metafilter or Reddit, are largely supportive and collaborative. Others, like 4chan or Funnyjunk, take on personalities that are more anarchic or antagonistic. The communities with stronger community values tend to be the ones with stronger moderation enforcing community norms, whether that means individual moderators, as at Metafilter, or the norms that Twitter’s users established for themselves. In particular, the @username convention and the #hashtag convention both came from use, not from Twitter.
Image uploads were supported by third-party clients long before Twitter launched its own image hosting service.
And if Twitter hopes to extend the service and sell it to advertisers, it makes more sense for Twitter to be a website rather than a service that works across different software. But that approach seems more likely to alienate the user and developer ecosystem that Twitter enables. And because Twitter as it exists today provides tremendous value to users and developers, capturing some of that value from those users and developers, rather than selling their attention to advertisers, seems like the better way to build the business, because it will encourage users to use the service more.
By carefully and narrowly designating what the Twitter service is, rather than listening to what the most active users want, is Twitter going to be driving its most active users and third-party developers away from its service?
The most active Twitter users seem to interact with the service mainly through Tweetdeck*, Tweetbot, or the rapidly stagnating official Twitter apps rather than through the website.
*Yes, Twitter owns Tweetdeck, but it seems to be a vastly different experience than the Twitter website.
See also Twitter’s Developer Rules of the Road, Terms of Service, and Display Guidelines, which will become display rules.
Marco.org, Interpreting some of Twitter’s API changes: “I sure as hell wouldn’t build a business on Twitter, and I don’t think I’ll even build any nontrivial features on it anymore.”

Doubling Down

Here’s an example of how overly aggressive tactics blow up in one’s face, and then of taking that explosion and doubling down aggressively.
Matthew Inman writes and publishes The Oatmeal, one of the funniest comics on the web. Users at Funnyjunk.com reposted many of Inman’s comics. So Inman asked his readers how he should respond and then had some dialogue with the proprietor and denizens of Funnyjunk.
Then last week, Inman received a demand letter from Funnyjunk: FunnyJunk is threatening to file a federal lawsuit against me unless I pay $20,000 in damages. Alleging that The Oatmeal made false accusations of willful copyright infringement and infringed Funnyjunk’s rights under the Lanham Act, Funnyjunk’s attorney demanded $20,000.
Inman’s attorney replied, as did Inman, who used IndieGoGo to ask his readers to raise the $20,000 and donate it to the National Wildlife Federation and the American Cancer Society (as well as send a crude cartoon to the owners of Funnyjunk).
After Inman raised more than $100,000, Funnyjunk’s attorney Charles Carreon went full Rakofsky and personally sued not only Inman, but also IndieGoGo, the National Wildlife Federation, and the American Cancer Society. The Oatmeal v. FunnyJunk, Part IV: Charles Carreon Sues Everybody: “On Friday, June 15, 2012, attorney Charles Carreon passed from mundane short-term internet notoriety into a sort of legal cartoon-supervillainy.”
Wow.