Pudgy Penguins come to Melbourne

Last week, I got to chill out with some of the Pudgy Penguins crew, as they launched the Oceania chapter of their NFT community. In case you weren’t aware, Pudgy Penguins are one of the top NFT collections, and have built a loyal fan base for these digital characters.

I went to a major Pudgy Penguin “Pengu Fest” in Hong Kong last year, and got to see first-hand how engaged their members are. I also gained some insights into how this ecosystem enables NFT holders to turn the IP associated with their individual characters into royalty-based income through licensing. In short, a subset of the NFT characters are chosen to be turned into merchandise. (For example, Pudgy Penguin soft toys are available in major stores such as Walmart in the USA, and Big W in Australia.) Owners of the selected NFTs earn a percentage of the sales revenue (less taxes, production costs, etc.).
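To illustrate how such a royalty split might work in principle, here is a minimal sketch; all of the figures, rates and the function itself are hypothetical assumptions for the example, not Pudgy Penguins’ or Overpass’s actual licensing terms.

```python
# Hypothetical illustration of an NFT merchandise royalty split.
# Every number here is made up for the example.

def holder_royalty(gross_sales: float, tax_rate: float, production_costs: float,
                   royalty_rate: float, num_licensed_holders: int) -> float:
    """Each licensed NFT holder's equal share of the net merchandise royalty pool."""
    net_revenue = gross_sales * (1 - tax_rate) - production_costs
    royalty_pool = max(net_revenue, 0) * royalty_rate
    return royalty_pool / num_licensed_holders

# Example: $500k of soft-toy sales, 10% tax, $200k production costs,
# a 5% royalty pool shared equally among 100 licensed holders.
print(holder_royalty(500_000, 0.10, 200_000, 0.05, 100))  # => 125.0 per holder
```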

The most recent collection of Pudgy collectibles is the Igloo figurines, which include early online access to Pudgy World. As a proud owner of one of these plastic figures, I’m still not sure what I have let myself in for…

As well as local meetups, the community can interact through other channels, including a trading card game called Vibes, also launched via the Overpass IP licensing platform.

Igloo Inc, the parent company of Pudgy Penguins and Overpass, has also announced that it is launching a Layer 2 blockchain on Ethereum, to be called Abstract, which is being positioned as “the blockchain for consumer crypto”.

Whatever your views on crypto, NFTs, online worlds and collectibles, there is no doubt that Pudgy Penguins have set themselves up with the admirable goals of building a healthy and inclusive community, underpinned by the twin pillars of individual creativity and positive culture.

To crypto sceptics (and the merely crypto curious), the “community” and the enthusiasm of its members could resemble something of a cult. Someone did say during last week’s panel discussion that “I am my penguin, and my penguin is me”. But there are worse things for people to get involved with – and for younger people (I don’t regard myself as part of the Pudgy core demographic), I can see the appeal. For example, your Pudgy Penguin PFP can act as a protective avatar as you engage and explore online – allowing you to share only the personal information that you want to, while you build up trust with other community participants, and before you choose to meet IRL.

There was also a discussion about the difference between meme coins and NFTs – the short answer is that the former represent pure speculation, while the latter aim to create value for their holders. In fact, someone suggested that meme coin trading is not that different to punting on betting apps. But since most NFT collections are well down on their market highs of a couple of years ago, maybe NFT holders and communities like Pudgy Penguins are trying to convince themselves that they are still backing a winner?

Overall, however, I remain positive about the opportunities that NFTs represent – especially in the creative fields, and as a new model for IP licensing. Even if cute flightless birds from the southern hemisphere are not your thing, I don’t think you can dismiss or ignore the social, cultural and economic impact that NFTs will have.

Next week: “When I’m Sixty-Four”


AI and the Human Factor

Earlier this month, I went to the Melbourne premiere of “Eno”, a documentary by Gary Hustwit, which is described as the world’s first generative feature film. Each time the film is shown, the choice and sequencing of scenes is different – no two versions are ever the same. Some content may never be screened at all.

I’ll leave readers to explore the director’s rationale for this approach (and the implications for film-making, cinema and streaming). But during a Q&A following the screening, Hustwit was at pains to explain that this is NOT a film generated by AI. He was also careful not to reveal too much about the proprietary software and hardware system he co-developed to compile and present the film.

However, the director did want to stress that he didn’t simply tell an AI bot to scour the internet, scrape any content by, about or featuring Brian Eno, and then assemble it into a compilation of clips. This documentary is presented according to a series of rules-based algorithms, and is a content-led venture curated by its creator. Yes, he had to review hours and hours of archive footage from which to draw key themes, but he also had to shoot new interview footage of Eno that would help to frame the context and support the narrative, while avoiding a banal biopic or a series of talking heads. The result is a skillful balance between linear storytelling, intriguing juxtaposition, traditional interviews, critical analysis, and deep exploration of the subject. The point is, for all its powerful capabilities, AI could not have created this film. It needed to start with human elements: innate curiosity on the part of the director; intelligent and empathetic interaction between film-maker and subject; and expert judgement in editing the content – as well as an element of risk-taking in allowing the algorithm to make the final choices when it comes to each screened version.
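To make the distinction concrete, a rules-based generative edit can be as simple as human-curated scenes plus explicit ordering constraints, with randomness only deciding the sequence within those constraints. The sketch below is purely illustrative: the scene names, themes and rules are my own hypothetical examples, not Hustwit’s actual software or data.

```python
# Toy illustration of a rules-based (non-AI) generative edit.
# Scenes, themes and rules are hypothetical; this is not Hustwit's system.
import random

SCENES = [
    {"id": "new_interview_1", "theme": "interview"},
    {"id": "new_interview_2", "theme": "interview"},
    {"id": "archive_roxy_music", "theme": "archive"},
    {"id": "archive_studio", "theme": "archive"},
    {"id": "generative_music_demo", "theme": "process"},
    {"id": "oblique_strategies", "theme": "process"},
]

def generate_cut(scenes, max_scenes=4, seed=None):
    """Build one screening: fixed opening and closing, a shuffled middle section,
    and a rule that the same theme never appears twice in a row (where possible)."""
    rng = random.Random(seed)
    pool = scenes[:]
    rng.shuffle(pool)
    pool = pool[:max_scenes]          # some content may never be screened at all
    cut = []
    while pool:
        # Prefer a scene whose theme differs from the previous one.
        pick = next((s for s in pool if not cut or s["theme"] != cut[-1]["theme"]), pool[0])
        pool.remove(pick)
        cut.append(pick)
    return ["opening_titles"] + [s["id"] for s in cut] + ["closing_reflection"]

print(generate_cut(SCENES))  # a different sequence every run
```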

That the subject of this documentary is Eno should not be surprising, either. He has a reputation for being a modern polymath, interested in science and technology as well as art. His use of Oblique Strategies in his creative work, his fascination with systems, his development of generative music, and his adoption of technology all point to someone who resists categorisation, and for whom work is play (and vice versa). In fact, imagination and play are the two key activities that define what it is to be human, as Eno explored in an essay for the BBC a few years ago. Again, AI does not yet have the power of imagination (and probably has no sense of play).

Sure, AI can conjure up all sorts of text, images, video, sound, music and other outputs. But in truth, it can only regurgitate what it has been trained on, even when extrapolating from data with which it has been supplied, and the human prompts it is given. This process of creation is more akin to plagiarism – taking source materials created by other people, blending and configuring them into some sort of “new” artefact, and passing the results off as the AI’s own work.

Plagiarism is neither new, nor is it exclusive to AI, of course. In fact, it’s a very natural human response to our environment: we all copy and transform images and sounds around us, as a form of tribute, homage, mimicry, creative engagement, pastiche, parody, satire, criticism, acknowledgement or denouncement. Leaving aside issues of attribution, permitted use, fair comment, IP rights, (mis)appropriation and deep fakes, some would argue that it is inevitable (and even a duty) for artists and creatives to “steal” ideas from their sources of inspiration – notably Robert Shore in his book about “originality”. The music industry is especially adept at all forms of “copying” – sampling, interpolation, remixes, mash-ups, cover versions – something that AI has been capable of for many years. See, for example, this (limited) app from Google released a few years ago. Whether the results could be regarded as the works of J.S. Bach or the creation of Google’s algorithm trained on Bach’s music would be a question for Bach scholars, musicologists, IP lawyers and software analysts.

Finally, for the last word on AI and the human condition, I refer you to the closing scene from John Carpenter’s cult sci-fi film, “Dark Star”, where an “intelligent” bomb outsmarts its human interlocutor. Enjoy!

Next week: AI hallucinations and the law


More on Music Streaming

A coda to my recent post on music streaming:

Despite the growth in Spotify’s subscribers (and an apparent shift from free to paid-for services), it seems that the company still managed to make a loss. Over-paying for high-profile projects can’t have helped the balance sheet either…

Why is it so hard for Spotify to make money? In part, it’s because streaming has decimated the price point for content. This price erosion began with downloads, and has accelerated with streaming – premium subscribers don’t stop to think about how little they are paying each time they stream a song; they have simply got used to paying comparatively little for their music, wherever and whenever they want it. They don’t even have to leave their screen or device to consume content – whereas, in the past, fixed weekly budgets and the need to visit a bricks-and-mortar shop meant record buyers were probably more discerning about their choices.
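Some back-of-envelope arithmetic shows just how small the per-stream figure can feel. The subscription price and listening volume below are purely illustrative assumptions, not Spotify’s actual pricing or usage data.

```python
# Rough, illustrative per-stream cost for a hypothetical premium subscriber.
monthly_fee = 12.00        # assumed monthly subscription price, in dollars
streams_per_month = 500    # assumed listening volume

cost_per_stream = monthly_fee / streams_per_month
print(f"Effective cost per stream: ${cost_per_stream:.3f}")  # => $0.024
```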

Paradoxically, the reduced cost of music production (thanks to cheaper recording and distribution technology) means there is more music being released than ever before. But there is a built-in expectation that the consumer price must also come down – and of course, with so much available content, there has to be a law of diminishing returns – both in terms of quality, and the amount of new content subscribers can listen to. (It would be interesting to know how many different songs or artists the average Spotify subscriber streams.)

While some artists continue to be financially successful in the streaming age (albeit backed up by concert revenue and merchandising sales), there is an awfully long tail of content that is rarely or never heard. Even Spotify has to manage and shift that inventory somehow, which means marketing budgets and customer acquisition costs have to grow accordingly (even though some of the promotional expenses can be offloaded onto artists and their labels).

Not only is streaming eroding content price points; in some cases it is also at risk of eroding copyright. It was recently disclosed that Twitter (now X) is being sued by music companies for breach of copyright.

You may recall that just over 10 years ago, a service called Twitter Music was launched with much anticipation (if not much fanfare…). Interestingly, part of the idea was that Twitter Music users could “integrate” their Spotify, iTunes or Rdio (who…?) accounts. It was also seen as a way for artists to engage more directly with their audience, and enable fans to discover new music. Less than a year later, Twitter pulled the plug.

One conclusion from all of this is that, often, even successful tech companies don’t really understand content. The classic case study in this area is probably Microsoft and Encarta, but you could include Kodak and KODAKOne – by contrast, I would cite News Corp and MySpace (a successful content business failing to understand tech). I suppose Netflix (which started as a mail-order DVD rental business) is an example of a tech business (it gained patents for its early subscription tech) that has managed to get content creation right – and its recent drive to shut down password sharing looks like it is paying dividends.

Of all its contemporaries, Apple is probably the most vertically integrated tech and content company – it manufactures the platform devices, manages streaming services, and even produces film and TV content (but not yet music?). In this context, I would say Google is a close second (devices, streaming, dominance in online advertising, but no original content production), with Amazon some way behind (although it has had a patchy experience with devices, it has a reasonable handle on streaming and content creation).

All of which makes it somewhat surprising that Spotify is still running at a loss.

Next week: Digital Identity – Wallets are the key?


AI vs IP

Can Artificial Intelligence software claim copyright in any work that was created using its algorithms?

The short answer is “no”, since only humans can establish copyright in original creative works. Copyright can be assigned to a company or trust, or a work can be released under various forms of Creative Commons licence, but there still needs to be a human author behind the copyright material. Copyright may also lapse over time, at which point the work becomes part of the public domain.

However, the extent to which a human author can claim copyright in a work that has been created with the help of AI is now being challenged. A recent case in the USA has determined that the author of a graphic novel, which included images created using Midjourney, cannot claim copyright in those images. While it was accepted that the author devised the text and other prompts that the software used as the generative inputs, the output images themselves could not be the subject of copyright protection – meaning they are presumably either in the public domain, or they fall under some category of Creative Commons. This case also indicates that, in the USA at least, failing to declare the use of AI tools in a work when applying for copyright registration may result in a rejected application.

Does this decision mean that the people who write AI programs could claim copyright in works created using their software? Probably not – as this would imply that Microsoft could establish copyright in every novel written using Word, especially if written with the help of its grammar and spelling tools.

On the other hand, programmers and software developers who use copyright material to train their models may need to obtain the relevant permission from the copyright holders (as would anyone who uses copyright content as prompts within the AI tools), unless they can claim exemptions under “fair dealing” or “fair use” provisions.

We’re still early in the lengthy process whereby copyright and other intellectual property laws are tested and re-calibrated in the wake of AI. Maybe the outcomes of future copyright cases will depend on whether you are Ed Sheeran or Robin Thicke….

Next week: Customer Experience vs Process Design