Non-linear Career Development

I’ve recently been asked several times, mainly by younger people in their 20s, what I actually do for work, and how I ended up doing it.

Regular readers will know that career development is a topic I have commented on many times in past articles, either as a result of my coaching and consulting engagements, or in response to the current state of the world.

This month marks 10 years since I started working in the crypto and digital asset industry. While it’s not a full-time job (I serve as a freelance consultant), it’s now the longest period of continuous “employment” I have had in a single role, sector or organisation. Not a bad gig for something that started out almost by accident – it certainly wasn’t part of a well-planned, linear and structured career path!

If I look back on my career, there is probably one constant factor – that at heart, I am an editor, and by and large, I have always worked in “content”, whether in traditional publishing, on-line data, or new media. My specific roles and the organisations I have worked for have been varied, but the output or format has been consistent.

After graduating in law in the early 1980s, I spent a stressful and frustrating few years as a paralegal in local government, helping people with housing difficulties or facing homelessness. Within 5 years, I was burned out, and needed a change.

So I retrained, and completed an evening class in journalism and sub-editing, run by a couple of senior editors from Fleet Street. However, my aspirations of working for glossy titles or cultural magazines came to nothing, as by this time I was probably too old to be hired as a trainee journalist or on a graduate programme. Luckily, I spotted an ad for “legal editors”, and putting my formal qualification together with my recent night school learning meant I was exactly in the right place at the right time.

That initial foray into publishing took me from London, to Hong Kong, and then to Australia, and along the way I transitioned into financial services, market data, international roles, business development, product management and digital assets. And I still use my legal knowledge every day, and “content in context” (hence the name of this blog) is relevant to everything I do.

Fast forward to 2026, and here I am running a media company serving the crypto industry. (More on that next week).

Looking back, there was no master plan, or grand strategy. My curiosity just kept pulling me from one industry or one role to the next.

1. Law taught me how to think.
2. Publishing taught me how to communicate.
3. Capital markets taught me financial infrastructure.

And when I walked into a Bitcoin pitch night in Melbourne more than 10 years ago, I felt at home (which is perhaps a little weird when you think about the somewhat impersonal, anonymous and 100% on-line world of crypto).

I appreciate that my career path looks messy from the outside, and it’s not for everyone, but it all fits in the bigger picture.

I didn’t become a lawyer, but I use legal thinking every day.

I left traditional finance 15 years ago, but that background is largely the reason I ended up working in crypto and digital assets.

If you’ve had a non-linear career, you will probably recognise the following:

Every skill you have picked up, every industry you wandered into, and every unplanned detour has been accumulating in the background.

You don’t necessarily connect the dots looking forward, you only ever connect them looking back.

But in the end, it all fits in the bigger picture.

Next week: My 10 Years in Crypto

=======

My thanks to Simian Giria for helping to initiate this topic.

More on AI Hallucinations…

The mainstream adoption of AI continues to reveal the precarious balance between the benefits and the pitfalls.

Yes, AI tools can reduce the time it takes to research information, to draft documents and to complete repetitive tasks.

But, AI is not so good at navigating subtle nuances, interpreting specific context or understanding satire or irony. In short, AI cannot “read the room” based on a few prompts and a collection of databases.

And then there is the issue of copyright licensing and other IP rights associated with the original content that large language models are trained on.

One of the biggest challenges to AI’s credibility is the frequent generation of “hallucinations” – false or misleading results that can populate even the most benign of search queries. I have commented previously on whether these errors are deliberate mistakes, an attempt at risk limitation (disclaimers), or a way of training AI tools on human users (“Spot the deliberate mistake!”). Or a get-out clause if we are stupid enough to rely on a dodgy AI summary!

With the proliferation of AI-generated results (“overviews”) in basic search queries, there is a tendency for AI tools to conflate or synthesize multiple sources and perspectives into a single “true” definition – often without authority or verified citations.

A recent example was a senior criminal barrister in Australia who submitted fake case citations and imaginary speeches in support of a client’s case.

Leaving aside the blatant dereliction of professional standards and the lapse in duty of care towards a client, this example of AI hallucinations within the context of legal proceedings is remarkable on a number of levels.

First, legal documents (statutes, law reports, secondary legislation, precedents, pleadings, contracts, witness statements, court transcripts, etc.) are highly structured and very specific as to their formal citations. (Having obtained an LLB degree, served as a paralegal for 5 years, and worked in legal publishing for more than 10 years, I am very aware of the risks of an incorrect citation or use of an inappropriate decision in support of a legal argument!)

Second, the legal profession has traditionally been at the forefront in the adoption and implementation of new technology. Whether through the early use of on-line searches for case reports, database creation for managing document precedents, the use of practice and case management software, or the development of decision-trees to evaluate the potential success of client pleadings, lawyers have been at the vanguard of these innovations.

Third, a simple document review process (akin to a spell-check) should have exposed the erroneous case citations. The failure to do so reveals a level of laziness or disregard that in another profession (e.g., medical, electrical, engineering) could give rise to a claim for negligence. (There are several established resources in this field, so this apparent omission or oversight is frankly embarrassing: https://libraryguides.griffith.edu.au/Law/case-citators, https://guides.sl.nsw.gov.au/case_law/case-citators, https://deakin.libguides.com/case-law/case-citators)
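To give a sense of how mechanical that first pass could be (this is purely an illustrative sketch, not any tool the citators above actually provide), a script can extract citation-like strings from a draft so that each one is flagged for manual verification against a citator. The two patterns below are assumptions covering two common Australian formats – medium-neutral (e.g. [2023] HCA 12) and reported series (e.g. (1992) 175 CLR 1):

```python
import re

# Hypothetical patterns for two common Australian citation formats:
#   medium-neutral, e.g. "[2023] HCA 12"
#   reported series, e.g. "(1992) 175 CLR 1"
CITATION_PATTERN = re.compile(
    r"\[\d{4}\]\s+[A-Z]{2,6}\s+\d+"          # [year] COURT number
    r"|\(\d{4}\)\s+\d+\s+[A-Z]{2,6}\s+\d+"   # (year) volume SERIES page
)

def extract_citations(text: str) -> list[str]:
    """Return every citation-like string in the draft, to be checked by a human."""
    return CITATION_PATTERN.findall(text)

draft = "Relying on Mabo v Queensland (No 2) (1992) 175 CLR 1 and [2023] HCA 12 ..."
for citation in extract_citations(draft):
    print(citation)  # each flagged string still needs verifying against a citator
```

This does nothing to confirm that a cited case actually exists – it simply guarantees that no citation slips through unexamined, which is exactly the spell-check-level diligence that was missing.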

In short, as we continue to rely on AI tools, unless we apply due diligence to these applications or remain vigilant to their fallibility, we use them at our peril.

 

Does age matter?

When it comes to standing for President, how old is “too old”? When it comes to travelling alone abroad, how young is “too young”?

In the first example, Donald Trump mocked his opponent, Joe Biden, about his age and infirmity. Now Trump could become the oldest ever candidate to be elected President, yet he doesn’t countenance any criticism of his own mental or physical frailty…

In the second example, a parent has been criticised for allowing their 15-year-old son to go Interrailing around Europe with friends, but minus any adult supervision. The teenager doesn’t appear to have come to any harm – and has probably gained some maturity in the process!

When it comes to the US Presidency, first Trump and then Biden set the record for being the oldest candidates to assume Office (both being in their 70s at the time of their respective inaugurations). In general, Presidents get elected in their 50s or 60s; in the post-war era, only three Presidents have been elected in their 40s – JFK, Clinton and Obama. Meanwhile, across the Atlantic, at the age of 61, Keir Starmer is the oldest person to become British Prime Minister since his Labour predecessor, James Callaghan, who took Office in 1976.

I’m not sure what conclusions we can draw from this, but it’s interesting to note that while many countries have mandatory retirement ages for Judges, it seems there is no upper age limit to becoming (or remaining) President, Prime Minister or Head of State. So while old age may be seen as a barrier to dispensing justice in a Court of Law, there is no such concern about exercising political power.

Obviously, age should not be the sole or primary criterion for measuring one’s ability to perform one’s role, to fulfil one’s obligations and to meet one’s responsibilities. Factors such as capacity, cognition, experience, character and overall fitness (physical, mental and moral) should be the basis on which candidates are assessed and evaluated.

At the other end of the spectrum, there are several areas where the legal minimum age is being debated: for example, the age of criminal responsibility; the age when children and teenagers should be allowed access to social media; and the voting age. There are also related discussions on the age of consent, marriage, reproductive rights, access to birth control, and censorship controls.

While it is understandable and desirable to protect minors from harm (both by themselves and by others), setting universal minimum ages is not that easy. Individual children and adolescents develop at different rates – biology is simply not that uniform or consistent! I’m sure we all know of teenagers who are far more mature and responsible than adults in their 20s (and even 30s).

Part of the problem is that a fixed age limit does not allow for any sort of transition period. For example, at age 17 years and 364 days, I’m not allowed to buy alcohol; one day later, I can fill my boots! Logic and common sense would suggest that if teenagers had the opportunity to consume alcohol in moderation, in appropriate social and public settings, they would have a much better appreciation for its effects and greater understanding of their personal tolerance, without getting themselves into trouble.

My concern is that in too many areas we are denying young people any control over their own choices and decision-making, and as a result we are absolving them from any personal responsibility. Consequently, as a society we are undermining the concept of individual accountability; when things go wrong as a result of their own choices and actions – whether deliberate, reckless, negligent, careless, inconsiderate or simply idiotic – it’s other people who are left to pick up the pieces. The situation is not helped by the inconsistencies inherent in our definitions of “minor”, “legal age”, “adult”, etc. For example, people can legally drive, have sex and reproduce before they can legally vote, or get married without their parents’ consent.

When I see media coverage that suggests that people in their 20s who have engaged in anti-social, irresponsible or unacceptable behaviour are “too young to know any better”, I can’t help thinking that these commentators are being too generous (or totally patronising). Some people in their 20s are responsible for making life-or-death decisions (first responders, emergency workers, police, medical staff, members of the military). Many more are in the workforce, fulfilling legal and contractual obligations on behalf of themselves and their employers. (And in some fields such as sport and entertainment, they get paid very handsomely to do so.)

Surely, we should treat people over the age of 18 as “responsible adults”. Likewise, we should really know the difference between “right and wrong” by the age of 8 or 9, and certainly by the time we start high school. But if, as some academics and social policy advocates suggest, “adults” don’t fully mature until they are in their mid-20s, perhaps we need to raise the minimum age for driving, marriage, consent and voting to at least 25!

Finally, on the issue of access to social media, I would argue that since the minimum age to enter into a legal contract is 18, and since a social media account is a form of contract (at the very least, it is a type of licence), then anyone under 18 needs to have their parents or legal guardians sign on their behalf to ensure compliance with the terms of use. Alternatively, underage users need to complete a test or undertake an assessment to demonstrate their understanding and competence to participate in these platforms.

Next week: “Megalopolis”? More like mega-flop it is!

 

AI hallucinations and the law

Several years ago, I blogged about the role of technology within the legal profession. One development I noted was the nascent use of AI to help test the merits of a case before it goes to trial, and to assess the likelihood of winning. Not only might this prevent potentially frivolous matters coming to trial, it would also reduce court time and legal costs.

More recently, there has been some caution (if not out and out scepticism) about the efficacy of using AI in support of legal research and case preparation. This current debate has been triggered by an academic paper from Stanford University that compared leading legal research tools (that claim to have been “enhanced” by AI) and ChatGPT. The results were sobering, with a staggering number of apparent “hallucinations” being generated, even by the specialist legal research tools. AI hallucinations are not unique to legal research tools, nor to the AI tools and the large language models (LLMs) they are trained on, as Stanford has previously reported. While the academic paper is awaiting formal publication, there has been some to-and-fro between the research authors and at least one of the named legal tools. This latter rebuttal rightly points out that any AI tool (especially a legal research and professional practice platform) has to be fit for purpose, and trained on appropriate data.

Aside from the Stanford research, some lawyers have been found to have relied upon AI tools such as ChatGPT and Google Bard to draft their submissions, only to discover that the results have cited non-existent precedents and cases – including in at least one high-profile prosecution. The latest research suggests that not only do AI tools “imagine” fictitious case reports, they can also fail to spot “bad” law (e.g., cases that have been overturned, or laws that have been repealed), offer inappropriate advice, or provide inaccurate or incorrect legal interpretation.

What if AI hallucinations resulted in the generation of invidious content about a living person – which in many circumstances would be deemed libel or slander? If a series of AI prompts gives rise to libellous content, who would be held responsible? Can AI itself be sued for libel? (Of course, under common law, it is impossible to libel the dead, as only a living person can sue for libel.)

I found an interesting discussion of this topic here, which concludes that while AI tools such as ChatGPT may appear to have some degree of autonomy (depending on their programming and training), they certainly don’t have true agency and their output in itself cannot be regarded in the same way as other forms of speech or text when it comes to legal liabilities or protections. The article identified three groups of actors who might be deemed responsible for AI results: AI software developers (companies like OpenAI), content hosts (such as search engines), and publishers (authors, journalists, news networks). It concluded that of the three, publishers, authors and journalists face the most responsibility and accountability for their content, even if they claimed “AI said this was true”.

Interestingly, the above discussion referenced news from early 2023, that a mayor in Australia was planning to sue OpenAI (the owners of ChatGPT) for defamation unless they corrected the record about false claims made about him. Thankfully, OpenAI appear to have heeded the letter of concern, and the mayor has since dropped his case (or, the false claim was simply over-written by a subsequent version of ChatGPT). However, the original Reuters link, above, which I sourced for this blog, makes no mention of the subsequent discontinuation, either as a footnote or update – which just goes to show how complex it is to correct the record, since the reference to his initial claim is still valid (it happened), even though it did not proceed (he chose not to pursue it). Even actual criminal convictions can be deemed “spent” after a given period of time, such that they no longer appear on an individual’s criminal record. Whereas someone found not guilty of a crime (or in the mayor’s case, falsely labelled with a conviction) cannot guarantee that references to the alleged events will be expunged from the internet, even with the evolution of the “right to be forgotten”.

Perhaps we’ll need to train AI tools to retrospectively correct or delete any false information about us; although conversely, AI is accelerating the proliferation of fake content – benign, humorous or malicious – thus setting the scene for the next blog in this series.

Next week: AI and Deep (and not so deep…) Fakes