This is a rarely updated list of projects that I would like to work on at some point, or have thought about but never managed to realise. The hope is that people will read this and get in touch.
Explain the trend to “horizontal” modes of organisation in companies that we see in some sectors, using a Shannon entropic graph model.
The idea is the following: companies, or networks of companies, can be organised in many ways, lying somewhere on a spectrum between “vertical” and “horizontal”. A “vertical” mode of organisation is characterised by strongly hierarchical information and supply channels: everybody has a boss to report to, and information travels through meetings of the bosses, who then distribute it to their respective teams. A “horizontal” mode of organisation, in contrast, is characterised by non-hierarchical information and supply channels, in which most employees/actors are directly connected.

Over the last century we have seen, at least in some sectors, a tendency to move from vertical to horizontal. Why is that? Is it possible to identify a principal driver of this change, setting aside the fact that it is, of course, many things coming together? The working hypothesis for this project is the following “theory”: companies, or networks of companies, are information-processing machines. A company requires a lot of direct communication between employees if there is a lot of uncertainty in the incoming information, and only very little direct communication if there is very little uncertainty. Hence vertical integration, with its few communication channels, makes sense when there is little uncertainty in the incoming information, and horizontal integration makes sense when there is a lot. One can build a simple graph-theoretic model that treats companies as information-processing machines and in which this theory holds. The question is only whether the theory is correct. So what needs to be done is to get the model to a point where it is specific enough to make realistic predictions, and then to identify an empirical test case and obtain the relevant data to check it.
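To make the working hypothesis concrete, here is a toy sketch in Python. It is entirely my own illustration, not the model from the write-up: the decision rule (a hierarchy of $n$ employees can absorb at most $\log_2 n$ bits of input uncertainty per round) is a hypothetical assumption chosen only to show the shape of such a model.

```python
import math

def shannon_entropy(probs):
    """Entropy (in bits) of the distribution over incoming message types."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def channels_vertical(n):
    """A 'vertical' firm: a chain/tree of command needs only n - 1 channels."""
    return n - 1

def channels_horizontal(n):
    """A 'horizontal' firm: everyone talks to everyone, n(n-1)/2 channels."""
    return n * (n - 1) // 2

def preferred_mode(input_probs, n_employees):
    """Toy decision rule (my invention, not the author's model): assume a
    vertical hierarchy of n employees can process at most log2(n) bits of
    input uncertainty per round; beyond that, the many direct peer-to-peer
    channels of a horizontal firm become worth their extra cost."""
    h = shannon_entropy(input_probs)
    return "horizontal" if h > math.log2(n_employees) else "vertical"
```

On this rule, a firm of 16 people facing a low-entropy input stream stays vertical, while a firm of 4 facing a high-entropy stream goes horizontal; an empirical test would have to replace the capacity assumption with something measurable.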
Status: I did quite a bit of thinking on this about two years ago, and some writing up. I stopped working on it when I failed to identify an empirical test for the model that I could get the data for, and it also started to seem as if my model was far too simplistic for a real company.
“Orders of magnitude off” – A database of instances of getting the relevant quantity or scale of a problem completely wrong.
Somewhat inspired by the question above, I would love to gather – and maybe even make available as a website or something like that – instances in which people’s approach to a problem, be it political, scientific, or otherwise, was (or is) formulated at a scale completely irrelevant to the actual problem, or completely misses the really relevant parameters. For example: somebody buying shoes from a CO2-friendly hipster store while producing much higher emissions through their diet. Or Australia’s government building houses for Aboriginal Australians as a way of making up for the “Stolen Generations” (a story I’ve heard but not checked). Or the proportion of companies that work in a Silicon Valley spirit compared to those that still work like Ford in 1920.
How much of the text produced every day is actually read, and is there something valuable one can do with this information?
Every day humans produce an immense number of articles, research papers, comments, and other texts. Now, firstly, I assume that most of this output is almost never read (analogous to the “dark web”, in the sense that almost all websites on the web are never visited). Secondly, it is quite clear that most of these texts contain nothing original whatsoever: for most texts and comments, a better formulation of what the author would like to say already exists. Yes, it is no mystery why, despite these two points, things are the way they are; but in this project I would like to (a) check the assumption above and (b) ask whether any insights gained from (a) can be used to organise our collective efforts a bit more “efficiently” – where it is as of now completely unclear to me what “efficiency” should mean here, because it is unclear how one should meaningfully define a purpose for our collective production of texts. I guess by focussing on a narrow sphere, for example the blogging sphere, the project could be freed of much of its vagueness.
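The second point – that for most texts a better formulation already exists – could be crudely operationalised as n-gram overlap against a reference corpus. A minimal sketch (my own toy proxy, and word n-grams are of course only a very rough stand-in for originality):

```python
def word_ngrams(text, n=3):
    """All n-word sequences in a text (a crude fingerprint of its content)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def novelty(text, corpus, n=3):
    """Fraction of a text's word n-grams that appear nowhere in the corpus.
    0.0 means every phrase was already formulated elsewhere; 1.0 means
    the text shares no n-word phrase with any corpus document."""
    seen = set()
    for doc in corpus:
        seen |= word_ngrams(doc, n)
    grams = word_ngrams(text, n)
    return len(grams - seen) / len(grams) if grams else 0.0
```

Running this over, say, a sample of blog posts against a larger reference corpus would give a first, very noisy answer to (a).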
Status: There is a nice heat map of the internet showing that large chunks of the IPv4 address space transfer practically no data, which suggests that large portions of the internet are never visited. One should be careful with this, though, because it might simply be that no servers are linked to those addresses. Those parts of the internet that are active seem to be quite active indeed, and it is also not clear what internet activity really has to do with the question above. So the status is: initial research.
Using large publication databases to extract interesting historical trends in literature/newspapers.
I would like to do a bit of big-data magic with large historical text corpora, such as public-domain books or newspaper archives, pretty much for the sake of it. My initial idea was to track the development of narrational perspective over the last two centuries (“what is the ratio of books written in the first person to those written in the third?”), but I am no longer sure whether there aren’t easier-to-answer questions that would also provide more insight.
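As a starting point for the perspective question, a deliberately naive classifier based on pronoun counts; this is a sketch under the assumption that pronoun frequency tracks narrational perspective, and real corpora would need proper tokenisation, dialogue handling, and so on:

```python
FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our", "ours"}
THIRD_PERSON = {"he", "she", "him", "her", "his", "hers",
                "they", "them", "their", "theirs"}

def narrational_perspective(text):
    """Crude guess at a text's narrational perspective via pronoun counts.
    Returns 'first', 'third', or 'unclear' when the counts tie."""
    words = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
    first = sum(w in FIRST_PERSON for w in words)
    third = sum(w in THIRD_PERSON for w in words)
    if first == third:
        return "unclear"
    return "first" if first > third else "third"
```

Aggregated per book and per decade over a large corpus, even a counter this crude would give a first version of the ratio I am after.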
Status: I’ve spent some time trying out different APIs, mostly to my dissatisfaction. I have some code snippets, but question-wise I’m completely open to suggestions.
Algebraic structures from operational primitives in Euclidean space.
Take as a working hypothesis that mathematics is the result of an abstractive process that started out with the simple geometric shapes people faced in their everyday lives: initially we formulated simple geometric concepts, such as “circle”, “square”, “line”, etc., to be able to complete everyday tasks such as building huts or building tools (the latter doesn’t necessitate the former, as ethnographic research shows). Then we generalised from these simple concepts (maybe driven by technological or other material developments, maybe not) and found that this lets us do interesting things (such as prove theorems) applicable well beyond anything we ever encounter in daily life (like coming up with a theory of curved spacetime).

Now, if this hypothesis were true, then I think it should be possible to derive some basic (in the sense of relatively non-abstract) mathematical structures starting only from the concrete surroundings people face in their everyday lives. This is the aim of this project. In particular, I want to derive (part of) the zoo of algebraic structures (rings, lattices, Boolean algebras, groups, etc.) just by starting with the kinds of relevant properties that everyday objects can have in normal space – shape (where is the boundary of an object?), consistency (plastic vs. elastic), location (where is an object situated?) – and asking what states of such objects I can produce by operationally modifying these properties (for example, changing an object’s location by moving it in space, or producing a new object by breaking a plastic object in two). The reachable states are the elements of the algebras, and products are inherited from the ordering induced by these operations.
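As a minimal illustration of the kind of derivation I have in mind (my own toy example, not a worked-out construction): the single operational primitive “move an object in the plane” already yields a group. Movements compose, there is a do-nothing movement, and every movement can be undone:

```python
def compose(move_a, move_b):
    """Perform one movement after another: translations compose by
    component-wise addition of the displacement vectors."""
    return (move_a[0] + move_b[0], move_a[1] + move_b[1])

STAY = (0, 0)  # the identity element: don't move the object at all

def undo(move):
    """Every movement can be undone by moving back: the inverse element."""
    return (-move[0], -move[1])
```

Closure, associativity, identity, and inverses all hold, so the reachable locations of an object under this primitive carry a group structure (here, the translation group of the plane); the project’s question is whether breaking, deforming, etc. yield lattices, semigroups, and the rest of the zoo in the same way.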
Status: I have done quite a bit of mostly confused thinking about this over the past year, but needed most of that time just to clarify the aim as I have stated it here. If there is anything to gain here, it should come out without too much effort; at the same time, it should quickly become clear if there is nothing to gain.
Claim-and-Wonder – an app to help us lose our fear of making bold predictions.
I think we are too afraid of making false predictions, and of being wrong about things more generally. This fear, I think, often leads to a somewhat timid way of dealing with the world: if your aim is to minimize the number of times you are wrong, your best strategy is to stop making predictions or having ideas altogether. I think this can be very harmful, to individuals and to societies. If instead we accept the basic Popperian tenet that (scientific) insight is driven by a series of bold conjectures and subsequent refutations, our aim should not be to minimize the risk of being wrong about something, but to maximize the chance of being right about something. Claim and Wonder is an app that encourages its users to make claims about anything (in the future) they want. The condition, however, is that each claim has to be falsifiable: it should be clear under what circumstances the claim turns out to be correct or false. Further, users can vote on other users’ claims and get points for correct guesses, etc. I would hope that this has a number of nice effects: 1) users cook up ideas about how the world develops in a playful setting, 2) users learn how to formulate ideas precisely, 3) the data would allow us to study how good large groups of people are at guessing the future correctly.
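A minimal sketch of the data model such an app might use (all names are hypothetical placeholders of mine): a claim is only accepted together with an explicit resolution criterion and a deadline, which is what enforces falsifiability, and scoring only happens once the claim has resolved.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Claim:
    author: str
    text: str                    # e.g. "It rains in Berlin tomorrow"
    resolution_criterion: str    # under exactly what circumstances is it true?
    deadline: date               # the claim must resolve by this date
    guesses: dict = field(default_factory=dict)  # user -> True/False vote
    outcome: Optional[bool] = None               # set once the claim resolves

def score(claim: Claim, user: str) -> int:
    """One point per correct guess, awarded only after resolution."""
    if claim.outcome is None or user not in claim.guesses:
        return 0
    return 1 if claim.guesses[user] == claim.outcome else 0
```

Usage would be: a user submits a `Claim`, others record `guesses` before the `deadline`, a moderator (or bot) sets `outcome`, and leaderboards sum `score` over resolved claims.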
There is no clear business model for this, and it’s not clear that there should be one. Programming this kind of thing is not very demanding, and I think it would be worth doing in a completely non-profit manner. Something similar already exists in the form of prediction markets, where people can bet on other people’s claims.
Status: Programming this is very simple and I already have a toy version that could be put up on a server as is. So getting this going on a technical level requires very little effort. The more difficult bit is the following problem: what should the claims be? There should be a lot of claims to keep people busy and engaged, and it is surprisingly difficult to come up with them. This would probably have to be solved by bots, or by automatically generating daily claims (today’s weather, today’s football stats, etc.).
Is this kind of collaboration model interesting enough to be scaled and turned into a website?
Well, in case I find that sharing my projects here leads to a significant increase in stimulation through (friends of) friends, and/or to their realisation, the question poses itself whether this could be an interesting format for other people’s projects as well. Specifically, the ideas I write down here are medium-sized: they are too involved to simply be asked as a question on Q&A forums like Quora or XX.Overflow, but not really major projects either. They are things where a lot of progress can be made with only one or two Skype sessions or so. At the same time, they require some expertise at some level, so (for most projects, not all) they should ideally be tackled by a small group of people with some previous knowledge of the subject. That suggests a different constellation of 3–4 people working on each project, rather than a fixed group going through all of them. Now, these characteristics – a) medium-sized/too big for Quora, b) too advanced to be answerable by a layperson – are shared, I assume, by the projects that a lot of people would like to work on, potentially also as a way to get to know like-minded people. Yet I am not aware of a platform that facilitates exactly that. So maybe that would be an interesting thing to investigate and maybe kickstart.
A “history of models of consciousness” book
Wouldn’t it be nice to have a large-format book with images of historical documents displaying, graphically, models of the mind or models of consciousness? That’s it, that’s the idea. This project is not motivated by some complicated question but by the nice quality that atlases – big books documenting historical material – usually have. Think of Haeckel’s atlas with its hand-drawn images of plants and sea creatures. The problem is that I have no experience in publishing, and I have not yet researched any pictures or texts. So my initial step would be to contact researchers working on exactly this and ask them for material, or for their willingness to participate. Does anybody here have a connection to suitable publishers?
Can we use Solomonoff Induction to judge the predictive capacities of machine learning and deep learning methods so that we can use it as a tool to aid their design?
I have very little understanding of Solomonoff Induction, but this is a question I would like to tackle in order to develop it. SI is a tool in algorithmic probability: given a string $x$, it assigns to a second string $y$ a conditional probability $P(y|x)$, given by the probability with which a randomly drawn program fed into a universal Turing machine would output $y$ given that it has previously output $x$. This probability is a function of the Kolmogorov complexity of $y$ and $x$ (intuitively, strings of low Kolmogorov complexity have a much higher probability of being produced by a random instruction string). Now, one could compare different Turing machines by how quickly the probabilities they produce converge to those given by SI (Markus Mueller has proven that they all converge to the same values, though at different speeds). Doesn’t this mean that we can compare different ML and DL solutions at this “universal” level and ask: how well would they perform in this SI test? Would the results of such a comparison be of any interest, given that the usual credo is that ML methods are heuristic and that there is no universal, golden ML algorithm?
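For reference, one standard way of writing the Solomonoff prior for a universal prefix machine $U$ (my rendering of the textbook definition, so treat the details with care):

```latex
M(x) \;=\; \sum_{p \,:\, U(p) \text{ outputs a string beginning with } x} 2^{-|p|},
\qquad
M(y \mid x) \;=\; \frac{M(xy)}{M(x)} .
```

The sum runs over (minimal) programs $p$ whose output begins with $x$; the coding theorem links $M$ to Kolmogorov complexity via $-\log_2 M(x) = K(x)$ up to logarithmic terms, which is the precise sense in which low-complexity strings receive high probability. Note that $M$ is incomputable, so any “SI test” of an ML method would have to work with computable approximations.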
This is my latest Arduino project, following the sound carpet and the knockblock. The idea is to build theremin-type antennas and put them into fruits or other (conductive) mundane everyday objects. In this way you can sense the distance of people in a room from those objects and use that to trigger a kind of interactive installation in which fruits react to you when you approach them (the title alludes to the initial idea of pears becoming increasingly panicked and asking you not to touch them as you get closer). The basic motivation was to give mundane objects an additional layer and to allude to things like the Internet of Things, object-oriented ontology, surveillance, etc. But there is no articulated idea yet – any ideas welcome.
Status: I’ve built a prototype and it works pretty well, a horseradish that completely freaks out whenever I get near…
A holographic blockchain
This is something that is both somewhat secretive and, as of Nov. 2017, pretty vague as an idea. I put it here mostly so that people who are interested in blockchain technology and somehow got lost on this site know that I’m working on it – feel free to get in touch.