(This is an edited version of my keynote talk from Adaconf 2025. When the video is published I will link it here.)

It’s a privilege to be invited here to deliver the opening keynote at this year’s Adaconf. I’d like to acknowledge that this talk was written on the unceded lands of the Bunurong people, and is being delivered on those of the Wurundjeri people, both of the Kulin nation, and I recognise the First Nations people of this land as our first scientists, environmentalists, teachers, and storytellers, who we could learn a lot from, if we choose to listen.

I am a reformed academic turned high school teacher turned data science education evangelist. Just so you know who you’re dealing with, when I first needed crutches, my son decorated them. Among other things, he wrote “Warning: May attack the education system if left unsupervised.” Underneath, he added “(Supervision may not be sufficient.)”

Before we get stuck into this, think for a moment about what it means to be difficult.

Today’s conference is all about change. Human beings are funny about change. And, to be fair, change is hard. To quote Terry Pratchett:

“Change was right. Change was necessary. Masklin was all in favour of change. What he was dead against was things not staying the same.”

There are many problems with change:

  • Recognising change is needed. It’s very easy to become complacent and accept the status quo when, actually, there are far better alternatives.
  • Finding a good change to make. (And I say “a good change” rather than “the right change” because real problems rarely have one right solution. Or even any right solutions. There are solutions that make things better, and solutions that make things worse. Often the solutions make things both better and worse. It’s rarely clear-cut!)
  • Figuring out how to make that change.
  • Creating the political/social will to make the change.
  • EVALUATING the change.

All of these steps are tricky, but the final step, evaluating change, is often poorly done, forgotten, or missing altogether. Evaluating the pros and cons of a change after it’s done is an easy step to skip. After all, it’s much easier to have “Make the change” as a nice simple KPI, rather than “Improve the system,” which is messy, complicated, nuanced, and often difficult to measure. “Make the change” can be ticked off. Improving the system is an infinite goal.

The inertia of human beings is significant. The inertia of systems can be all but insurmountable.

Grace Hopper is often quoted as saying:

The most dangerous phrase in the English language is “We’ve always done it that way.”

So I want to talk about change, about people, about systems, what it takes to make progress, and why being difficult is so important.

I’m going to start with a story that really happened to me, that I think illustrates the human condition horrifyingly well.

In August I was in Perth for a week, to work with the extraordinary folks at Pawsey Supercomputing Research Centre. On my last morning there, as I was getting ready to leave, the fire alarm went off in the hotel where I was staying. I assumed, as you do, that it was probably a false alarm, but I grabbed my coat and my room key and headed for the door just in case. Out in the hallway, the smoke door was closed, which seemed ominous. I had not seen it closed all week.

I looked around for the fire stairs, reflecting nervously that I should have identified them when I first arrived. How many people do that, though? I saw a couple clearly evacuating, and headed in their direction. Around me, the door to almost every room was ajar, with concerned faces peering through the cracks. It was a Saturday morning, no doubt many folks were attempting to have a lie in, but the alarm was LOUD.

The couple ahead of me on the stairs were anxious. One of them was already halfway down, calling out to her partner “I’m not waiting for you! Sorry! I’m outta here!”

We clattered down the stairs, but no-one seemed to be following us.

The couple ahead of me got in their car and left. I stood around, in cold drizzle, wondering what to do. The alarm was inaudible from outside, and there were no crowds gathering. No more people coming out of the stairwell.

The only way out to the street was through a garage under the hotel, so I nervously scuttled through the garage, eyeing the building overhead and wondering just how foolish I was being. I wound up standing outside on the footpath while a whole lot of nothing happened.

I still couldn’t hear the alarms. An older woman came out looking anxious, and some people came around the other side of the hotel – ah, I thought, perhaps people are leaving via a different door! But no, they greeted the woman warmly and went inside with her. More people followed, and they gathered in the lobby. Some sort of family reunion, perhaps. There was no sign of any staff, or anyone even remotely official. A couple of people came out and lit cigarettes, but that was situation normal, since there was no smoking allowed on hotel grounds.

I ventured into the lobby to find the same family gathered there chatting, while alarms still blared, so I went back outside, feeling extremely bewildered. Could anyone else even hear those alarms? Was this some really elaborate gaslighting prank?

Eventually, a fire truck drove up, with a smaller fire department vehicle behind it. “Finally!” I thought, relieved. At least someone was taking it seriously. Most of the firefighters went inside – in their gear, but without helmets, which suggested they weren’t expecting anything severe.

I looked up at the hotel windows, to see if I could spot anything significant, and lo and behold, there were families up there, INSIDE the hotel, with fire alarms still blaring, who were holding small children up to the windows so they could see the fire truck.

As I mused on the story this tells about the human race, it suddenly struck me: This is a climate change analogy. The building could have been on fire – for all we knew, it was – but people were just spectating at the fire truck. Or, in the case of climate change, at the extreme weather, at the bushfires, at the droughts, and the floods.

They were all assuming, I suppose, that someone would save them, or there would be enough time “later” to react, if reacting proved necessary.

When the time came, surely someone would fix it. Just like climate change.

Well, folks, the time has come. Someone has to fix it. And that someone is us.

By now the evidence is clear. Climate change is real. Catastrophically so. Trickle down economics is not real. Catastrophically so. Wealth inequality is real. Catastrophically so. Tech solutionism? Not real. Catastrophically so. The solutions to our worst problems are, for the most part, sociopolitical, not technological. Tech is easy. People are hard.

We understand the spread of disease to the extent that we know that clean indoor air is crucial to the avoidance and management of most respiratory pandemics, just as clean water is crucial to the avoidance and management of diseases such as cholera and typhoid.

We understand the physics of climate change to the extent that we know that we must cease all possible carbon emissions immediately, and reduce the CO2 in the atmosphere as fast as we can, to mitigate the disasters that are already upon us.

We understand the economics of poverty and homelessness to the extent that we know that simply giving people housing and income is more effective than any other strategy, and much cheaper than allowing homelessness and poverty to continue.

And we understand the psychology of education to the extent that we know that our current education system is actively destroying the problem solving, critical thinking, and lateral thinking abilities of our kids.

And yet we are taking effective action on precisely none of these issues.

We’ve let evidence and reason be overtaken by political and corporate concerns (though it’s often impossible to distinguish between the two in this age of state capture).

But here I am, talking politics, at a tech conference. Why would I do that?

Well, we have a related problem in the tech industry. Our – blindness? Indifference maybe? – to evidence and rationality is particularly problematic when we’re designing devices and systems that are increasingly dominant in our lives, and in society as a whole.

When I did my Computer Science degree, approximately in the Cretaceous period, I did a compulsory software engineering subject. In that subject we learned that comprehensive testing was an essential phase of the software lifecycle. In fact, it belongs throughout the lifecycle, but definitely BEFORE product release, contrary to the current “Move fast and break things” ethos.

Recently my family got its act together to get off Spotify and onto something more ethical. Something that pays artists more, that isn’t swamped with AI, and isn’t funding death and destruction. Our first try was Tidal. We used Tune My Music to copy our playlists across – and I would love to know how their system works, because there were some really odd errors. Sometimes it got the right song but the wrong artist, like “You Mustn’t Kick it Around” by Bobby Short instead of Anthony Warlow. But mostly it seemed to work.

Tidal itself, though, was bizarrely broken. And it wasn’t a phone/version issue, because it happened on three different phones covering iOS and two different versions of Android. It would randomly complain that it had encountered an error and just stop playing. It couldn’t reliably download songs unless you were only downloading one song at a time, AND you kept the app open in the foreground, and didn’t try to do anything else on your device at the same time. It just wasn’t usable. For an app that was first released in 2014, and gets talked up a lot, you have to wonder how much testing it has ever undergone.

Which is interesting, really. Because you can’t throw a stone on the musical internet without hitting a post about how great Tidal is for folks who really love music. Presumably it’s for folks who really love music but don’t care to actually listen to it? Someone like Terry Pratchett’s Patrician in this quote from Soul Music:

“In fact the kind of music he really liked was the kind that never got played. It ruined music, in his opinion, to torment it by involving it on dried skins, bits of dead cat, and lumps of metal hammered into wires and tubes. It ought to stay written down, on the page, in rows of little dots and crotchets all neatly caught between lines.”

Tidal has significant issues, but it’s hard to find people pushing back on it. When I contacted them saying we had had these problems on three different devices with three different operating system versions, Tidal replied with a form letter asking for details of the device and OS involved, which just raised the barrier to getting any kind of meaningful action, and I gave up. And it’s not just Tidal. That’s a recent example for me, but tech that makes life a little bit harder, a little bit less reliable, a lot more annoying, is everywhere. We just put up with it. We accept a disturbingly high failure rate from our technology. We accept it as an inevitable part of our lives.

Giving up sometimes seems to be where the tech industry wants us. And I find it a bit bizarre.

A dear friend of mine was recently in hospital for surgery. A little while afterwards, she sent me this story via Signal:

Did I rant at you about the fancy hospital bed I was in the other week?

It was so amazingly fancy. Air mattress that slightly changes how it’s inflated every 5-10 minutes so that the patient doesn’t end up with bed sores.

So fancy. Such a cool way of using tech.

UNFORTUNATELY

It means the air pump runs every 5 minutes

Which was loud enough to wake me up. Even ignoring the slight movement of the mattress.

But there’s a sleep mode!

....... But the nurses apologised that they could not really get it to work.
But they tried! 👍 And it did work!

... For an hour.

And then not again.

Cut to me, reading several hospital bed manuals at night, squatting down under the bed, trying to press multiple buttons to get it into different modes.

If you get a bed made before 2021, the sleep mode default is 3 hours. After then it’s 8 hours.
Mine was a slightly different model that... Was broken.

It’s ok, I know what to do. I’ll just... Turn it off at the switch!

Which deflates the mattress.

At 4am, I am walking the corridors, wondering how anyone at all is sleeping. (I know I’m a very light sleeper but... By God.)

I gave up, unplugged it, and slept on the uninflated mattress, waking up with pins and needles in my arms.

Up until that point, it was working great to prevent bedsores. I was up and about and not lying in bed!

So yeah. I’m likely going back to that hospital soon, and if I need to stay over, I’ll be bringing a camping mattress.

So not only does this piece of tech not work properly, the nurses at the hospital, and therefore, one assumes, management at the hospital, KNOW IT DOESN’T WORK PROPERLY. And yet the hospital still owns these beds. Still makes patients sleep on these beds. Still buys more of these beds. Because... well... we’ve just accepted that tech is a bit broken. But we seem to think that tech that is a bit broken, or, in some cases, A LOT broken, is better than no tech. I reckon my friend would have been better off with a simple, soft, low tech mattress. But she didn’t have that choice.

In the same way that not using Large Language Models seems not to be an option in most workplaces these days. And that whole industry is astonishing to me. Among the many, many ethical issues that surround it, if an individual came to the government saying “I have this amazing business idea, but I can’t make it lucrative without stealing”, the government would say “That’s not a business idea, that’s a crime.” But when the tech industry does it, apparently it’s obvious that we have to let it happen. Because Progress.

But, you know what? I don’t believe we do!

Let’s talk user interfaces and usability for a moment. I bought my first car, a Daihatsu Charade, in... well... let’s just say a far distant year. It had a physical slider for selecting where the air came out, another slider for the strength of the fan, and physical, tangible buttons for just about everything. I could turn the fan on by touch. I could make it demist the windscreen by feel. I could keep looking at the road and successfully operate all of the controls in my car. Even the radio.

Now I own a ten-year-old Prius. It looks much fancier. Despite being ten years old, it still looks high tech. It doesn’t have those outdated buttons and dials. No. It has ridiculous controls that cannot be operated without looking at them, instead of at the road. Whose genius idea was that? The Transport Research Laboratory in the UK found that drivers using built-in touchscreens while driving are more distracted than drivers under the influence of alcohol or drugs, or drivers using a handheld phone to call or text, taking up to 57% longer to react than drivers who were fully focused on the road. Which implies the need for a new crime on the books entitled “driving while under the influence of the car.”
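To make that figure concrete, here is a back-of-the-envelope calculation. The baseline reaction time and the speed are my own assumptions, purely for illustration; only the “57% longer” figure comes from the study:

```python
# Purely illustrative numbers - only the "57% longer" figure comes from the study.
baseline_reaction_s = 1.0                           # assumed reaction time for a focused driver
distracted_reaction_s = baseline_reaction_s * 1.57  # up to 57% longer when using the touchscreen
speed_m_per_s = 100 * 1000 / 3600                   # 100 km/h is roughly 27.8 metres per second

extra_distance_m = (distracted_reaction_s - baseline_reaction_s) * speed_m_per_s
print(round(extra_distance_m, 1))                   # ~15.8 extra metres travelled before reacting
```

That’s three or four car lengths covered before the braking even starts.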

While some car companies are switching back to tactile controls for at least basic functionality, overall the car industry has let image and style overtake functionality and safety. And newer cars are getting worse, with ever larger, shinier touch screens displaying increasingly complex interfaces.

And shiny over functional is also what we’re seeing from the Large Language Model hype machine that is OpenAI, Anthropic, Google, Microsoft, and many others. We assume that bigger, faster, shinier, more data, means better. The tech industry avoids objective, systematic evaluation like the plague, so when Sam Altman says the latest version of ChatGPT is a “PhD level” expert, the hype machine does not allow us to pause and ask “What does that mean?” and “How are you measuring that, exactly?”

Instead we get a wave of uncritically breathless reporting that boosts the hype, rather than questioning it.

The trouble is, we’re not terribly good at questioning things, in general. And that’s not really a big surprise.

We take 3 year olds who “why” us to distraction, and we turn them into obedient clones who are programmed to get the “right” answer, follow the known procedure, stand in the expected line. It’s nonsensical, really, to teach that complicated problems have a right answer at all. In school the right answer means the expected one. The one on the answer sheet. Where there’s a choice between a complex and nuanced answer that takes in multiple aspects of the problem, and the answer given in the textbook, we teach kids to believe the textbook. We reward them for thinking less, and conforming more.

Real problems don’t have right answers. They have solutions that have pros and cons. That help some people but harm others. That improve some parts of the problem, but make others worse. That need to be critically evaluated, rather than being checked against some arbitrary, godlike answer sheet.

What if we went back to the age of WHY? I don’t mean all regressing to 3 years of age, though, to be honest, it sometimes feels as though the tech billionaires and politicians currently running the place have done just that – it’s all “MINE!” and the stamping of petulant little feet – I mean asking difficult questions, and not being put off by unsatisfying answers.

What would happen to our projects if we all asked more difficult questions about the systems we build and the policies we put in place?

Questions like:

  • Why are we doing this?
  • What problem does it solve?
  • Who does it help?
  • Who does it harm?
  • Is there a better approach?
  • Is what we’re doing sustainable?
  • Is what we’re doing ethical?
  • What are the unintended consequences?

Note: Not “are there any unintended consequences?” There are always unintended consequences!

I have an interesting story about that. Some years ago, self-driving trucks were introduced on mine sites in WA. In that constrained environment they seemed a great option. Fewer accidents, more predictability, and that corporate favourite, lower costs! But it turned out there was an unexpected downside. The trucks followed exactly the same path every time – predictable indeed! – and in doing so they trashed the road, carving deep ruts into it. Human drivers have enough natural variation to follow slightly different paths by default.

Once they added enough randomness to the route planning to vary the paths, the trucks were fine. And that’s an outcome that seems obvious once you hear it, but without any known precedent, is really hard to predict! Which is why I say there are always unexpected consequences – because if you’re doing something new, there will be edge cases, unexpected inputs, and weird outputs that would be almost impossible to predict, or even to simulate, unless you knew about them in advance. Which is why evaluating solutions once they’re in place – and long afterwards – is crucial, to understand the full implications of the changes you’ve made.
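For what it’s worth, here is a minimal sketch of the kind of fix that anecdote describes. All of the names and numbers are mine, invented for illustration: each trip gets a small random lateral offset, so successive runs spread their wear across the road instead of carving the same ruts.

```python
import random

def offset_path(waypoints, max_offset_m=0.5):
    """Shift one planned run sideways by a small random amount.

    waypoints: (x, y) positions in metres along a roughly straight haul road,
               where x is the lateral direction and y is the direction of travel.
    max_offset_m: largest lateral shift allowed. The offset is chosen once per
                  trip so the path stays smooth rather than wobbling point to point.
    """
    offset = random.uniform(-max_offset_m, max_offset_m)
    return [(x + offset, y) for (x, y) in waypoints]

# Three trips, three slightly different lines along the same planned route.
plan = [(0.0, 0.0), (0.0, 50.0), (0.0, 100.0)]
for trip in range(3):
    print(offset_path(plan))
```

A real autonomous haulage system is vastly more sophisticated than this, of course, but the underlying idea is just that small: deliberately reintroduce the variation human drivers provide for free.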

So back to our difficult questions. What if we asked those questions about this frenzied, hyperscaling approach to Large Language Models? Do chatbots provide sufficient benefit that they justify the ethical concerns? Perhaps most urgently, their horrendous contribution to greenhouse gases via their energy use. AI companies say that what Professor Emily Bender calls “racist piles of linear algebra” – which is terribly unfair, because I believe the algebra is actually NON linear – are going to solve climate change. This is tech solutionism run mad. Climate change is a sociopolitical problem, not a technical one, and there’s zero evidence of chatbots solving sociopolitical problems – although there is already a quite astounding amount of evidence of chatbots making sociopolitical problems much, much worse.

And the grift gets even better – having created powerful cheating machines that break school and university assessment systems (which actually need to be broken, but that’s a different talk – or book!), the AI industry is now selling powerful anti-cheating machines that don’t actually work. “Set a thief to catch a thief” makes perfect sense, doesn’t it? Because thieves know all the little tricks, all the sneaky shortcuts, how thieves think. They can analyse the scene and judge the most likely entry points, or techniques used. I wouldn’t be surprised if it really works. For thieves. But setting a chatbot to catch a chatbot? That’s where the analogy falls down. Because chatbots don’t know anything. They don’t think anything. They cannot analyse, assess, judge, or understand. They just spit out stuff that sounds plausible.

There is no algorithmic, computational, reliable way to detect AI output. The one thing it’s really good at is looking plausible, and it turns out that it’s even more “computationally plausible” than it is humanly plausible. So while human beings might detect a certain wrongness from chatbot generated slop, that’s because human beings have fine judgement and reason. Well. Some of us do. I’m reserving judgement on the people running the AI industry.

We have yet to create a computer program that has fine judgement and reason. It’s difficult, after all, to design and build something to perform a task that we do not understand and cannot even define. The likelihood of intelligence, judgement, and reason arising spontaneously out of larger and larger AI models is rather like the actual likelihood of an infinite number of monkeys on an infinite number of typewriters writing a Shakespearean sonnet. It makes for a good line, but it’s much more likely that those monkeys will continue to produce random noise indefinitely. It’s magical thinking. It’s not real.

To produce rational, reasoning systems, we will need a much better understanding of rationality and reasoning than we presently seem rational enough, or reasonable enough, to achieve!

So we’re forced to use AI, forced to accept very broken technology, and told that climate change is either not real or not fixable. How do we fix it? We have to start by being difficult!

Among other things, we need to be difficult about data. We tend to take data as some kind of objective, almost holy truth. But the questions we choose to ask, the things we choose to measure, and the definitions and categories we create all have a significant, and very subjective, impact on the data we collect. So we need to ask difficult, inconvenient questions about data. What’s the range on that graph? Why doesn’t the y-axis start at zero? What was the sample size for this survey, and how did you find your participants? Even relatively simple and straightforward values can be unexpectedly complex. For example, what is the definition of resting heart rate?

We think we know what a resting heart rate is, but like most real world problems, it turns out to be unexpectedly complex, and weirdly fuzzy round the edges! Sure, it’s heart rate when you’re resting – but how do you define resting? How long do you have to have been resting for before resting heart rate can be measured?

In high school I used to hate sports classes because we’d rush to change into our sports gear, hurtle down to the oval, panicking about being late, and then they’d have us measure our “resting” heart rate. Now there might be a tonne of definitions of resting heart rate, but I’d be astounded if that scenario fit into any of them!
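To make that fuzziness concrete, here is a toy sketch. The readings, the function, and both “resting” windows are invented purely for illustration:

```python
def resting_hr(samples_bpm, rest_minutes):
    """Toy resting heart rate: the average of the last rest_minutes readings,
    on the assumption that the person has been still for that long.

    samples_bpm: one heart-rate reading (beats per minute) per minute, oldest first.
    rest_minutes: how long someone must have been resting for it to count.
    """
    window = samples_bpm[-rest_minutes:]
    return sum(window) / len(window)

# Minute-by-minute readings: hurtling down to the oval, then sitting still.
readings = [150, 140, 120, 100, 90, 82, 76, 72, 70, 69]

print(resting_hr(readings, rest_minutes=3))  # ~70 bpm: looks like a resting rate
print(resting_hr(readings, rest_minutes=8))  # ~85 bpm: still includes the recovery
```

Same data, two defensible definitions, two quite different answers. Which is exactly why the definition questions are worth asking.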

Do you have a tendency to ask difficult questions at work? Are your difficult questions well received or brutally discouraged?

Because that’s what tends to happen, isn’t it? Difficult questions mean people have to think about things. Companies might have to slow down in the rush to product & profit. There might be more work to do. It might even be harder work. Which all adds up to less profit. No one wants to risk that.

But positive change requires difficult questions. It requires us to challenge the status quo. When someone tells us we have to use AI for “productivity reasons”, positive change needs us to ask why, and to say “where’s your proof?”

When someone tells us hyper scaling, climate destroying, thieving, exploitative large language models are the only path to progress, positive change demands we ask “why?”

When someone tells us we can afford hundreds of billions of dollars for American submarines but we can’t afford to make sure everyone has somewhere to live, enough to eat, and support for disabilities, positive change demands we ask “how can you justify that?”

You’ve probably heard the saying that the collective IQ of a mob is divided by the number of people in the mob. You can extend that to say that the collective IQ of a system or organisation is divided by its size. My husband calls large organisations like governments, universities, and large corporations BDGs: Big Dumb Giants. Do you work for a BDG? Has a BDG (Telstra, maybe, or a government department) made you tear your hair out recently?

They are about as easy to steer as a planet. And they are all about amassing power.

Unfortunately, as Emily Bender and Alex Hanna put it in The AI Con, mass automation tools “serve as a means of centralising power, amassing data, and generating profit, rather than providing technology that is socially beneficial.” So all this fabulous AI is not about democratising the internet, or computation, or democratising anything at all. It’s about centralising power. And if we want to decentralise power, and build anything socially beneficial, we’re going to have to push back. To be difficult.

The tech industry would have us believe that they are all about progress, innovation, and the benefit of humanity. It’s important to remember that this is, for the most part, marketing copy put out by companies who are, in fact, all about making money and accumulating power, regardless of whether it harms the climate, harms society, or harms individuals. What we forget – or are persuaded to ignore or disbelieve – is that we don’t have to settle for that. We can push back.

In “Utopia for Realists”, Rutger Bregman wrote: “If we want to change the world we need to be unrealistic, unreasonable, and impossible.” In other words, we need to be difficult. Radical, even.

Now look, this is all a lot to think about, so I’d like to take a moment for a change of pace, to read you a fairy tale I wrote recently, called All Things to All People.

Once upon a time, there was an oligarchy. It wasn’t your usual kind of oligarchy. The oligarchs weren’t officially the rulers of the land. Instead it was a sneaky kind of oligarchy – though to call it sneaky is a bit weird, because for people being sneaky, they were incredibly loud. They just weren’t loud about being in charge. They were loud in a “Hey! Look! What’s That Over There?” kind of way.

The oligarchs didn’t think of themselves as oligarchs, of course. Oligarchs rarely do. Instead, they thought of themselves as benefactors. Saviours of mankind. (Though, in truth, the “man” part was more important than the “kind” part. It turns out that, to save mankind, you have to do some pretty terrible things.)

In their own minds, they were Creating a Glorious New Future! A future in which no one would need to work! Or, at least, no one would need to be paid for their work, which was almost the same thing. Actually, it was better, as long as you defined “better” as “better for the oligarchs”.

The oligarchs set about building a device. The device would write masterpieces the like of which the world had never seen. Or maybe it would be a friend, or even a therapist. Obviously it would save the world from Climate Change, and it would absolutely rescue humanity from the need to be creative or interesting in any way. Honestly, it was going to be incredible... as long as no-one asked the oligarchs what “incredible” meant exactly. It was going to be all things to all people, that much was clear.

Unfortunately, not defining exactly what The Device was for made it rather tricky to build. The oligarchs scooped up a tonne of funding because obviously everybody wanted an All Things to All People device, and they were terribly convincing. They built The Device and set it free with great pomp and ceremony. It could do anything, up to and including flying to the moon! Indeed, it practically had a PhD in flying to the moon! Sadly, it flew to the moon in much the same way as a bucket of sewage would. It didn’t get very high and it made a terrible, stinky mess when it hit the ground.

Undaunted – for they were Fearless, World-Saving men – they built a second device, feeding it everything they could find. They were so good at finding things to feed it that they even found things belonging to other people. But because they were building an All Things to All People Device, it was obviously fine for them to take the things without asking. Because The Device was essential to Saving Mankind! This new Actual Device was amazing. It passed all the tests, knew all of the things! It was just a teensy bit embarrassing when it turned out not to be able to solve all of the problems. It certainly solved some of the problems, which was pretty impressive, actually, as long as you were happy with the solution to 2+2 being “a kind of bug”. But everyone was jolly impressed, anyway, because the oligarchs told them to be, and they were terribly convincing.

Unfortunately, word got out that The Actual Device was drinking a huge amount of water, and burning up quite a lot of the world’s atmosphere, in order to do (or, in many cases, fail to do) these tricks, but the oligarchs pointed out that there was simply no other way to build The Actual Device, and obviously we had to do that, and there was plenty more water and atmosphere where they came from! And they were terribly convincing.

Next, the oligarchs scooped up more money, took even more water, burned even more atmosphere, and built the Really Actual Device This Time, or Lucifer, as they affectionately named it. It seemed truly incredible, until word got out that Lucifer had been fed poor and vulnerable people, so that they could wade around in the rather horrifically grubby innards of the machine and remove the problematic bits. Of course, they didn’t necessarily make it out of Lucifer alive, or undamaged, but the oligarchs pointed out that there was simply no other way to build Lucifer, or his descendants, and obviously we had to do that. And they were terribly convincing.

Of course, this argument came as rather a surprise to people who were researching ways to build other devices that didn’t eat people, burn up the atmosphere, drink all the water, steal all the things, or scoop up all the money, but simply did one thing, really really well. To their shock, the world didn’t seem to mind the people being eaten, or anything else really, and so the devices continued to be built.

The trouble was, Lucifer still tended to do tricks that weren’t exactly the right trick. Some of the tricks looked kind of right if you looked at them from a certain angle. But some of them were really badly wrong. Plus, some people were actually using Lucifer in ways that the oligarchs encouraged, but also advised against in really tiny print, and they were getting hurt.

What’s more, the horribly ungrateful people who had had all their belongings stolen were starting to get quite cranky about it. So were the people who had no water to drink, and the people who were finding it hard to breathe.

The horrible, ungrateful people were so ridiculous that they even tried to apply laws to the oligarchy, even after the oligarchs had patiently explained that the laws shouldn’t apply to them, as they were Busy Saving the World, and therefore didn’t have time to waste worrying about trivial things like human rights, or environmental destruction.

The story doesn’t stop there. But perhaps we can choose our own ending...

And that’s it, really, isn’t it? We want to choose our own endings, choose our own paths, rather than simply accepting the future that billionaires seem to be driving us towards. A future of climate disasters, ever increasing income inequality, and the rule of money rather than the rule of law. If we want a different future, we have to be prepared to be difficult. To ask challenging questions. To say no sometimes. To suggest alternatives.

Being difficult means questioning things.

Fascism needs us to be afraid, but it also needs us to be accepting. It needs us to accept that poor people deserve to be poor and rich people deserve to be rich. That refugees are dangerous and don’t deserve asylum, that it’s too late to tackle climate change, that the economy – an imaginary construct – matters more than humanity, compassion, and sustainability. So being difficult means being brave, and standing up to bullies, racists, fascists, transphobes, homophobes, and the rest.

But, unexpectedly, being difficult also needs us to be compassionate and collaborative. To build systems, processes, and indeed societies that meet everyone’s needs, rather than the needs of a dominant few. Being collaborative is much more difficult than being competitive. It’s hard work, requiring patience, empathy, and understanding, rather than adversarial rage. It doesn’t mean we can’t feel rage, of course. In fact, as Andrew Denton said: “rage is an excellent fuel, but a terrible compass.”

Being difficult is the only path to change. We don’t get change by being obedient, accepting the status quo, and accepting “This is the way we have always done things” as a good reason to keep doing things that way. We get change by asking “why?”, asking “who benefits?”, asking “how can we do better?”. By being difficult!

The theme of today’s conference is change – so let’s stop and ask ourselves: what change do we want? Where do we want to go? How do we want our society to look? And how can we get there?

So go forth and be difficult. And let’s make some choices.

Dr Linda McIver pioneered authentic Data Science and Computational Science education with real impact for secondary students and founded the Australian Data Science Education Institute in 2018. Author of Raising Heretics: Teaching Kids to Change the World, Linda is an inspiring keynote speaker who has appeared on the ABC’s panel program Q&A, and regularly delivers engaging Professional Development for Primary, Secondary, and Tertiary Educators across all disciplines. A passionate educator, researcher and advocate for STEM, equity and inclusion, with a PhD in Computer Science Education and extensive teaching experience, Linda’s mission is to ensure that all Australian students have the opportunity to learn STEM and Data Science skills in the context of projects that empower them to solve problems and make a positive difference to the world.