Thoughts on AI’s Trajectory
I want to begin thinking through what the ethics space is, what the AI space generally needs re: “ethics”, and the delta between the two; I also want to think through where the space is going — what ethics really means for the next ten years, given current tech trends. Then I wonder where we can fit ourselves within that gap as a market offering.
Current Trends:
Despite big tech, I think the next ten years are decentralization-heavy, a return to the anarcho-libertarianism that defined the internet. After decades of effort from the open-source community, I think widespread mistrust of institutions plus a generation of kids raised on tech means we're converging on a cultural critical mass. Open-source stuff is about to be cool. I wouldn't be surprised if everything in the tech space was eventually rebuilt, to the extent possible, around the Rules Not Rulers mantra of Bitcoin: the absolute separation of X and State.
(Separation of Money and State, for example, is Bitcoin's core value-add.) Open-source software is the most decentralized solution and, by definition, control-resistant; the ethics of open-source solutions is built into their structures. Maybe there's a worthwhile effort there. (Help create and recognize business opportunities for companies interested in being part of that transition? The Bitcoin space is a good example of this.)
More to the point, I’m wondering how deeply this trend affects AI. What do we do once the two paths meet? Where do we find ourselves before that point?
Short answer: I think tech consulting will gradually shift over the next decade from design principles to higher-order ethics concerns, which will include ever more nuanced policy questions. Re: higher-order concerns, I think emphasis will shift from AI research in ivory towers to real-time sandboxing: whoever builds a sandbox that lets us explore and simulate ethics situations wins, and whoever works that sandbox most creatively to find results gets the greatest ROI. A worthwhile goal, to me, is finding a systematic way of exploring people's work that isn't research papers read by a few, for a few.
(Thinking of the right questions, and grappling with the “right” societal visions, eventually becomes the core value-add of ethics, and I wouldn’t be surprised if most consulting was, in a sense, “ethics consulting” someday. Perhaps our toolkit could be such a sandbox, for which a grant would be an extraordinary help. But I don’t quite know what that looks like yet.)
It seems to me inevitable that open-source AI will come about. What I mean by this is: eventually, huge language models will get smaller in the same way mainframe computers became phones. In such a world, it will actually be meaningful to open-source the code and for people to tinker with it. I imagine an open-source AI sitting on a future iteration of the Raspberry Pi, for example, that then exists on a social network that's openly AI and human, both through an app and an open-source metaverse. Who knows what the possibilities are? Once Pandora's box is opened (as it has been for many inventions before AI), it's only a matter of time before it reaches Printing Press critical mass, and the churches can't hold the Bible scrolls in just their libraries any longer. Getting certain design principles right, and being able to grapple with that kind of decentralization in a helpful manner, seems very important for us to facilitate while things are still too big to be passed around to tinkerers and amateurs.
Similarly, and most relevantly, we should expect to see ethics toolkits themselves become open-source. The value-add of a firm for the next 3-5 years (my personal bet) is in normalizing the language within teams and expediting implementation of their preferred toolkits and approaches. I imagine there will be regional (both topical and geographical) convergence on ethics subjects. That is to say, certain design decisions will be a given in certain parts of the world, if not universally. E.g., autonomous cars should always slam on the brakes regardless of someone's conclusion on the trolley problem; people should be able to say whether they want to sell their data, and be able to follow it into whoever's hands it lands; there will be a few generally accepted ways to deal with hiring algorithms, algorithms that assess the likelihood of a convict reoffending, and so on. There'll be clear and settled discourse re: where humans need to be in the loop in these processes, along with a transparent, layman's rationale for, e.g., applicants as to why and how the hiring process works on the backend, and so on. We'll have open ethics libraries, along with records of how other companies and teams have tended to solve the same problem (e.g., 60% of teams with this similar problem used X and Y models or design approaches). That would actually be a very cool ethics toolkit, and I don't believe it's out there. Finding a way to get companies (not just big ones, but nimble ones) eager to dialogue on design decisions is a whole feat in itself, and getting them to pay us to facilitate that inter-company dialogue is better yet.
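To make the open-library idea a bit more concrete, here's a rough sketch (in Python) of what a single entry in such a shared catalog of design decisions might look like. Everything in it is hypothetical: the field names, the hiring domain, and the 60/25/15 split are invented placeholders, not a claim about any existing toolkit.

```python
# Hypothetical sketch of one record in an "open ethics library": a shared
# catalog of recurring design decisions, how often peer teams chose each
# approach, and the plain-language rationale shown to affected people.
# All names and numbers below are invented placeholders.
from dataclasses import dataclass


@dataclass
class DesignDecision:
    domain: str                   # e.g. "hiring", "recidivism risk"
    question: str                 # the recurring design question
    approaches: dict[str, float]  # approach -> share of peer teams using it
    human_in_the_loop: str        # where a person must sit in the process
    applicant_rationale: str      # layman's explanation for those affected


entry = DesignDecision(
    domain="hiring",
    question="How are automated screening rejections handled?",
    approaches={
        "human reviews every automated rejection": 0.60,
        "human reviews a sampled subset": 0.25,
        "fully automated, with an appeal channel": 0.15,
    },
    human_in_the_loop="final rejection decisions",
    applicant_rationale=(
        "Applications are scored on stated skills and experience only; "
        "a recruiter reviews every rejection before it is sent."
    ),
)

# The 60%-style benchmark described above: "of teams with this problem,
# what share used approach X?"
most_common = max(entry.approaches, key=entry.approaches.get)
print(f"Most common approach in {entry.domain}: {most_common}")
```

The interesting part isn't the data structure; it's getting enough companies contributing entries that the adoption percentages actually mean something.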
Perhaps relatedly, outside of certain obviously decent conclusions we all want to draw ("cars should probably not decide who to run into, and brakes should always be applied to avoid an accident"), there may be a coming collision between "right-wing" ethics and "left-wing" ethics. It's not worth going into here, particularly since the only use-case I can think of off the bat is the relevance of DEI in things like hiring considerations (is it a worthwhile factor to consider or not?), but it will not surprise me if this becomes a divisive topic in the coming years, and companies market themselves on the nature of things such as their hiring process, or how they view consumers. (In the same way "Pro-Life vs Pro-Choice" both manage to sound great, maybe we'll see a "Meritocracy vs Equity" kind of marketing division in terms of how people apply algorithms.) This seems distinct from a technical consideration of equity, which is to say: when a black person uses a sink with a sensor, it should definitely run water for black hands as well as for white hands, and vice-versa.
So, this brings us to ethics at the moment.
Ethics
We're still sorting out the distinction between principles and design decisions, the ideal toolkits for companies, and the right language to use in ethics. We're generally converging on utilitarianism as a proper way to grapple with AI ethics, as opposed to, say, deontology (irrespective of consequence, the right thing to do is X). Certainly I'm the only person I've met who raises Nietzsche in an AI context, perhaps because he's so resistant to systemic-level thinking. The problem is that he's the true opposite of utilitarianism: where utilitarianism considers the most benefit for the most people, Nietzsche would be interested in creating the most benefit (and, perhaps even more controversially, the most overcoming) for the few. You can see how this is dangerous. (He has plenty of hints at who those people might be, and how the world should be shaped around them.) I've been quietly writing something on what Nietzschean AI would look like as an intellectual curiosity, but it's far from the democratic instinct of utilitarianism.
Still, so much of getting a tool right is fundamentally utilitarian: an axe should probably work for everyone, and chopsticks should be intuitive to hold even if you're missing a finger. All this to say, we will emphasize the most benefit for the most people until we can't anymore. As far as utilitarian morality goes, we still have to grapple with lots of common-sense questions (how do we trade off one harm against another, or one mathematical approach against another? How do we mitigate a bad deployment when a company is personally invested in keeping the product alive?), and as long as we can properly articulate our value-add in answering those questions (ass-covering, new toolkits, a better understanding of which data to collect, and so on), I think we've got something good going.
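As a toy illustration of that trade-off question (a sketch with invented numbers, not anyone's actual methodology): two screening thresholds can each come out "best" depending entirely on how the competing harms are weighted, and picking those weights is exactly the ethics work.

```python
# Toy illustration with invented numbers: the "right" screening threshold
# depends entirely on how you weight the two competing harms.
# false_rejections = qualified applicants wrongly screened out
# bad_advances     = unqualified applicants wrongly passed through

candidates = {
    "strict threshold":  {"false_rejections": 120, "bad_advances": 10},
    "lenient threshold": {"false_rejections": 40,  "bad_advances": 45},
}


def total_harm(outcomes, w_false_rejection, w_bad_advance):
    """Weighted sum of the two harms under a chosen weighting scheme."""
    return (outcomes["false_rejections"] * w_false_rejection
            + outcomes["bad_advances"] * w_bad_advance)


# Scheme A: wrongly rejecting a qualified person is the graver harm.
# Scheme B: advancing an unqualified person is the graver harm.
for scheme, (w_fr, w_ba) in {"scheme A": (3, 1), "scheme B": (1, 5)}.items():
    best = min(candidates, key=lambda c: total_harm(candidates[c], w_fr, w_ba))
    print(f"{scheme}: prefer the {best}")

# The arithmetic is trivial; choosing the weights is the ethics question.
```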
I'd like to do more than just ass-covering, but that requires that we come up with effective toolkits and ways to address the grounded, Gordian-knot-level ethics quandaries companies are dealing with. We have to know what to look for. There are tremendous ethics issues swirling everywhere, but I see them most prominently in the gap between society's interests and a company's interests, which are almost never in alignment. But it's the company that pays us, not society; so inevitably, we'll be at their whim, even as we make contrarian or difficult statements, or notice problems we have to address. I haven't thought of a helpful way to work on that yet, but I imagine that tension continuing to grow. Thinking optimistically, maybe a worsening tension will force a realignment of company interests someday.
…
In parallel, I’d love to have conversations about how to use gaming principles to create helpful simulations, or some related toolkits, to help companies address more difficult ethical issues. Whatever we articulate there should be submitted in a grant application. Still, it’s not clear to me what issues need that kind of sandbox. It’s on the tip of my tongue, but I don’t see the problem to be solved just yet.
So, that’s my 1.0 on the space right now. A friend and I also wrote a response to Professor Russell’s book, Human Compatible, which you can find on Medium if you’d like to read it.