Apr 11, 2023
Combining Collective Wisdom to Aggregate Information On Where AI/ML is Going
It’s wild out there in the world of AI and ML in 2023. You’ve got centralized Large Language Model (LLM) services like ChatGPT, Bing, and Bard at each other’s throats. You’ve got the likes of ChatGLM-6B, GPT4All, and LLaMA threatening to tear down the whole business model of using LLMs to build a destination site.
There’s a neat startup called Manifold Markets which I discovered over a year ago through HackerNews. It’s a, “Prediction Market,” website which allows users to set up their own predictions about unknown events or properties, and allows other users to bet and try to win fake points based upon correct resolutions of said markets. So really it’s a gambling site, and you’re gambling for points, which you can buy, but can’t cash out. Here’s one of the markets I put together just out of morbid curiosity:
A lot of the bets that people have set up on this site in relation to AI are, in my view, highly subjective in nature. What I mean by this is, users set up bets about the state of AI at some future date, but the threshold for how that bet would be resolved is highly open to interpretation. For example, let’s say someone’s bet was:
“Will AI Get Super Way Smarter By the End of the Year?”
What you find in the discussion threads of betting markets is a lot of argumentation over the nuances of the definition of the marketplace itself. So - what does the market maker mean by, “super,” and what do they mean by, “smarter?” The strict definitions of these terms could completely change the threshold at which the bet resolves, thereby allowing the market maker to arbitrarily pick winners.
In my mind, this goes against the strict definition of what a, “prediction,” is in the sense of a demarcated, “novel prediction,” used in science, which means the value of even participating in something like that is pretty low. Personally, I’m hoping, perhaps misguidedly, to learn something that will actually occur in reality, in as much as that’s even possible. I’m looking to have a deeper, better range of knowledge and expertise on a topic than my peers, so that I can arguably do better at my career, life, hunting and gathering, romance, or what-have-you. Maybe I’m just projecting, but it seems to me that engaging in back-and-forth arguments and waiting for some mini-authority-figure to resolve a bet so that I can amass points in a game has far less value to me personally than being able to really delve into a subject and know a little bit more than I would just reading the news, or perusing social media and accepting what people say at face value. I would rather try to, “be smart,” than, “feel smart.”
So in my opinion, which I am sort of parroting from a wide variety of studies on prediction markets that I have read, the value of a, “prediction market,” is not really in making predictions, but in gaining subject matter expertise, which could hypothetically then be translated over into making novel predictions, improving research and so on. You really don’t know what you don’t know, and when you gamify what you don’t know, competing against others who also want to share information and win an information game in a battle of egos, you really can end up with a much more nuanced understanding of the world.
So this leaves us with the question, “How does one set up a better prediction market, vs. just a pissing contest?”
Well, Dan Schwarz and Lindsay Taylor at Google wrote about some findings from an internal prediction markets game here. Some key takeaways:
- Precise forecasting can only work on precisely defined questions.
- Give your predictors feedback on their performance, to help them improve their forecast accuracy over time.
- Incentivize experts to make predictions on questions where they have insights.
Manifold has a handle on the performance feedback with all sorts of internal tools they offer. But as far as making prediction markets precisely defined, they are very free-form and so you end up seeing a lot of joke markets, like this one:
Not to contradict what Schwarz and Taylor write in the paper above, but from what I have noticed playing on Manifold over the last year, markets which have a certain, “virality,” to them tend to gain more participants, which seems to improve market aggregation. For example, I had put together a market dealing with precisely where the Chang’e 5-T1 booster would crash-land on the moon, within a range of precision, using the Selenographic Coordinate System, but few people were interested in this topic relative to something a bit more viral, such as, “Will a nuclear bomb detonate in 2023?”
People tend to get really interested in politics, wars, world-ending scenarios - you know, viral stuff. But more so, they tend to get interested in topics that they can relate to.
So if I put together a topic about AI, make it validated by a third party, and make it really empirical, like, “Will this score on this leaderboard go past a certain point by the end of this year?” - it tends to get two or three or even no participants. But if I instead re-phrase the question as, “Will AI Gain Significantly More Common Sense By the End of the Year?” and, within the market description, explain that I’m measuring, “Common Sense,” by a particular leaderboard, then I suddenly get a lot of participants.
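To illustrate what I mean by an empirical resolution criterion, here’s a minimal Python sketch of how such a market could resolve mechanically. The entry format, model names, and threshold below are all made up for illustration - they are not any real leaderboard’s schema:

```python
# Hypothetical sketch of an empirical resolution rule: the market resolves
# YES if the leaderboard's top score exceeds a threshold stated up front
# in the market description. All names and numbers here are invented.

def resolve_market(entries: list[dict], threshold: float) -> bool:
    """Resolve YES (True) if any leaderboard entry beats the threshold."""
    top_score = max(entry["score"] for entry in entries)
    return top_score > threshold

# Example: a toy leaderboard snapshot taken at the market's close date.
leaderboard = [
    {"model": "model-a", "score": 88.4},
    {"model": "model-b", "score": 91.2},
]
print(resolve_market(leaderboard, threshold=90.0))  # prints True
```

The point is just that the rule leaves nothing to interpretation: anyone holding the same leaderboard snapshot gets the same resolution.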
So, that’s my, “hack,” for getting more engagement. Now, how do I incentivize experts to actually engage and comment? Of that, I’m not as sure. There seems to be an interested niche of engineers on Manifold who actively look to bet on AI-related technical topics, so part of that task deals with finding those individuals and engaging with them.
In terms of recruiting experts from outside of the platform, Manifold offers an incentive to market creators: you get paid in fake points if you help them recruit new users. So my thought is, I will write this blog post and share it in some places where AI experts might actually hang out, and hopefully they will participate in my markets, or perhaps create markets of their own. Maybe some will even click on the links I have provided below, which include a referral code. I believe that, at least at the time of writing, new users who sign up through a referral link also get a small boost in their initial starting wallet.
Toward the objective of improving the quality of my markets, Manifold also allows users to pay each other Manifold bucks (called Mana, or $M) through links. So what I’ve started doing lately is, if anyone is extra persnickety and makes some good points in my market comments, or points out flaws, I pay them a token amount of Mana, maybe $M 50 or so, depending upon how good of a point they make, and then I update the resolution criteria.
So anyway, that’s pretty much what this blog post was about: trying to recruit you, dear reader, to join me and become a degenerate gambler, but one wearing a monocle while we do so. I declare no conflicts of interest - I have no ownership interest in Manifold Markets (though I wish I did!). I’m just trying to gain a more realistic picture of the world.
Here are some of my AI-themed markets and referral links:
Bets Ending End of 2024
Link to Market: Evaluating Scientific Claims
Link to Market: Feel and React to Pain
Bets Ending End of 2023
Link to Market: Common Sense Judgements About What Happens Next
Link to Market: General Conceptual Skills
Link to Market: Meaning of Questions
Link to Market: Linguistic Temporal Understanding
Link to Market: Hallucinating Less
Link to Market: Tracking Changes in State
Link to Market: Community-Based Ethical Judgements
Link to Market: Egocentric Navigation
Destination Site Definition
The old-school term for this in the 1990s was, “web portal” - basically, a service like Google, which aims to be your one-stop shop for as much as it can be. Whereas to, “Google it,” might be a term which means search, Google as a company aims to cover as much of your whole life as possible with YouTube, Maps, Drive, Products, etc. Some speculate that OpenAI’s current strategy, at the time of writing this article, is to become a destination site by pulling people into LLMs-as-a-service through the use of plugins, whereas a couple of months ago, it was thought that OpenAI’s strategy may have been to be purely an API that would allow companies to construct their own LLM applications for various purposes. The idea of OpenAI potentially trying to be a destination site would hypothetically threaten a wider range of Google’s revenue sources. Conveniently, there’s a betting market for this hypothesis too!
Link to Market: ChatGPT as a Destination Site
Sign Up For My Email List
- To stay up to date on articles like this, sign up for my email list:
Community Building, Market Analysis, Strategy