Shae's Ramblings

Stuff I find slightly meaningful and close to achievable

This is the third blog post in a series (first post and second post) about my switch from Social-Democracy to Social-Liberalism.

The first post was about specific things I did not believe, to try to dispel misconceptions about why I chose to identify as “liberal”. It briefly touched on several topics — socialism, private property, democracy, meritocracy.

The second article was about markets and uncertainty: randomness and uncertainty are not always counterproductive, and sometimes improve overall efficiency.

This post is about centrism and political compromise. It explores my political position, as well as two different attitudes towards politics.

Why do centrists exist?

It is tempting to see centrists as syncretic, chaotic individuals with piecemeal beliefs and inconsistent views. Some of these impressions are correct, and I will begin my little journey with evidence that points in this direction.

Centrism as inconsistent nonsense

One of the most striking examples of this is France’s Emmanuel Macron, who has said so much stuff that we now have a hilarious tapestry of contradictory takes:

There is much to be debated about Macron; arguably this is just him constantly re-evaluating his position to match the center of the French electorate (which has been slowly drifting to the right — as has he). This will probably be the topic of another blog post.

Centrism as a grift

One striking example of nonsensical centrism was Andrew Yang’s campaign back in the 2022 US elections. His slogan was close to Macron’s — short but sweet:

Not right, not left, forward.

This might be appealing to many voters, and I am probably in that crowd — a priori — but even back then I was well aware this pitch was very much not the best bet for progress-oriented policies. I will not talk much about Yang — I do not know him well. The one notable thing is that he supports Universal Basic Income, which is good.

Centrism as a political reality

The most “centrist” answer to this question — and the one I like the most — is simply the “pragmatic” one (“pragmatic” being a word centrists often utter). It can be succinctly summed up in the following sentences:

At any point in time, parties and voters are roughly split into 50% of Right-Wing voters and 50% of Left-Wing voters.

It is probably inefficient and counterproductive to never attempt to bridge the gap between these large parts of the population. No matter what you think of each half, they are there, and they vote. You should factor their existence into your political model.

This disillusioned view is perhaps not convincing for every reader, and it also feels lackluster. It is useful as the tip of the (metaphorical) iceberg, but I will attempt to dig deeper, so bear with me!

Centrism as a “Secret third thing”

It is widely accepted that the concepts of “Left” and “Right” politics were born during the French Revolution. They referred to nothing more than the seating arrangements within the “Estates General”.

Quoting the Wikipedia article,

Those who sat on the left generally opposed the “Ancien Régime” and the Bourbon Monarchy and supported the Revolution, the creation of a democratic republic and the secularisation of society.

The left at the time were what we would now call Classical Radicals — a subcategory of Liberalism that would arguably be centrist-coded in 2023 France, ironically.

If we follow the Wikipedia definitions, leftism is vaguely coded as follows:

Left-wing politics describes the range of political ideologies that support and seek to achieve social equality and egalitarianism, often in opposition to social hierarchy as a whole

While Right-Wing politics is broadly typified as follows:

Right-wing politics describes the range of political ideologies that view certain social orders and hierarchies as inevitable, natural, normal, or desirable, typically supporting this position based on natural law, economics, authority, property or tradition.

What “Left” and “Right” mean is often very loose and subject to interpretation. I especially like this vague definition of Left-Wing politics, because it reminds me that I used to be further left, and that it sometimes is correct to be.

What is centrism then?

Do centrists oppose social equality? They’re not strongly against it, but they’re not ardent proponents either, that much is true.

Do they support social orders and hierarchies? I don’t think so — though they would certainly wonder why completely changing the social order and abolishing most hierarchies is needed.

Perhaps more confusing, I have seen people left-of-center justify some kinds of hierarchies more than centrists do, and some people right-of-center argue for more egalitarianism than centrists. This isn’t helpful.

Yet — and I must confess I relate to this — centrism is often where you end up when you disagree with everyone. Suppose, for the sake of argument, that you are a left-liberal in today’s France. Hell, go back to the ’60s or ’70s — the point stands. You then ask yourself the unfathomable question:

Who, in France, is Liberal?

The short answer is: no one. The right wing is plagued by Gaullism, a syncretic political tradition whose number one principle is having a “strong state”. You can check the wiki — it is the number one defining characteristic. The left, aside from a roughly 10% Parti Socialiste minority led by Michel Rocard in the eighties, was also Gaullist, or Marxist when it wasn’t. The two ideologies share a number of things, and the Gaullist-Marxist post-war alliance shows they can cooperate on many issues — they sometimes trust each other more than they trust the “liberal” factions of the left and right.

Centrists are hard to classify, but that’s mainly because they’re not exactly homogeneous. They’re a bunch of people without a (political) home. They side with different people at different times, but they rarely endorse anyone, and often only temporarily.

Centrists are definitely a very diverse bunch, so it makes sense to look at the many faces of centrism.

Centrists are — sometimes — unorthodox Utopians

In light of the above remarks, it strikes me as obvious that most centrists don’t exactly oppose both Left and Right politics; they usually mean to emphasize that the most important metrics do not always align with the Left’s and the Right’s values. Each side has its own metric, but a centrist will very often bring up one metric — say “Wage Increases” or “Scientific and Technological Progress” — and argue this metric trumps the others, and doesn’t really fall into either the Right or Left box.

A few folks belong to this box, including the fine people at Works in Progress magazine or the Institute for Progress. As a recovering scientist and STEM graduate, I must attest that scientific progress is indeed not (always) political, and it should probably remain so.

Centrists are — sometimes — die-hard empiricists

The term “pragmatism” has been so overused by grifters that it is unhelpful at the moment, but there is one ideology that many centrists and moderates appreciate, arguably more than the Left and Right do, and it is Empiricism.

The definition can be found in the first section of the Wikipedia page:

Empiricism in the philosophy of science emphasizes evidence, especially as discovered in experiments. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world rather than resting solely on a priori reasoning, intuition, or revelation.

You’ll hear many buzzwords go around, but those to watch are — in my experience:

  • Data-Driven policy — good policies should have robust empirical evidence backing their potential upsides
  • Evidence-based policy — which is a synonym for the previous notion
  • Results-oriented — politics that cares about results, not ideas or ideologies
  • Outcomes-oriented — which means roughly the same as above

These things are not exclusively centrist — hell, I hope every party embraces them. But you’ll mostly hear centrists, center-left and center-right politicians and voters using these words. Very often though, they’re just making stuff up, and aren’t much more evidence-based than the other parties.

This last phenomenon is somewhat connected to Techno-Populism: the notion that “being competent” and “being evidence-based” are vibes in themselves, and that some parties run on these vibes, without any commitment to remaining evidence-based after the elections.

About being evidence-based, John Maynard Keynes, in his Bayesian glory, reportedly said:

When my information changes, I alter my conclusions. What do you do, sir?

This — I hope — feels correct to almost everyone. Accepting reality and changing your mind based on data is quite good, in my opinion. Some even go as far as saying having strong idealistic views is counterproductive (I briefly wrote about Ideal Theory in my first blog-post).

So centrists sometimes make interesting points. Now why would you think “centrism” sucks? Let’s look at it.

The case against Centrism?

Centrism is often plagued with dumb people, and I quite often complain about French centrists on my Twitter account. Centrism obviously sucks in that regard, because centrism is just the home of those who have no home. You have to share this home with people you think are silly. You might have nailed the art of being centrist and correct — but you are sitting right next to someone who is centrist for mainly incorrect reasons.

This is true for Leftism and Rightism too, though — people getting to the “good” results via incorrect derivations is very common. Let us look at two main criticisms of “centrism”; there are too many to list, so these are just the two I hear most often.

Centrism as bad political strategy

One quote that made me more comfortable with political extremism is also from Keynes, and it goes:

When the final result is expected to be a compromise, it is often prudent to start from an extreme position.

This strikes me as a fair point. You picture politics as a “big average” of every citizen’s opinion:

$$\mathrm{Politics} = \dfrac{A + B + C + \dots }{N}$$

And therefore, if you aim to push the political consensus to either the right or the left, it makes sense to go further left/right than what you truly believe. It is a basic bargaining strategy.
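The bargaining effect is easy to see in a toy sketch (all numbers invented for illustration): adding a deliberately extreme voice to the pool drags the average further than stating your true position would.

```python
def consensus(opinions):
    """Model the political consensus as the plain average of opinions,
    on a one-dimensional left (-10) to right (+10) axis."""
    return sum(opinions) / len(opinions)

electorate = [-4, -1, 0, 2, 3]  # five hypothetical voters

# State your true position (+3) versus an exaggerated one (+9):
honest = consensus(electorate + [3])
extreme = consensus(electorate + [9])

print(honest, extreme)  # 0.5 vs 1.5 — the extreme entrant pulls the average further right
```

The same arithmetic explains why, under this (admittedly naive) averaging model, moderating your stated position weakens your pull on the consensus.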

Two notions help shed light and bring some nuance though. The first: Keynes said “start from an extreme position”. This says nothing about your ability to compromise and shift more or less quickly towards the center.

For the second notion, let me digress on Game Theory a bit.

In one of the most cited papers in empirical game theory, Axelrod lays out the results of a large-scale competition between different tactics. The tournament was later re-run with even more entrants. In both cases, the winner was a strategy called “Tit-for-Tat”, which goes:

Tit-for-Tat always starts by cooperating [with the opponent] and when the opponent defects, it thereafter does what the other player did on the previous move. If the opponent cooperates, the program also cooperates next round, and vice versa.
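The decision rule is tiny. Here is a minimal sketch of an iterated match, using the standard textbook Prisoner’s Dilemma payoffs (T=5, R=3, P=1, S=0 — my choice for illustration, not Axelrod’s exact tournament setup):

```python
# One-shot payoffs: (my move, their move) -> my score.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strat_a, strat_b, rounds=5):
    """Iterate the dilemma and return both players' total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (15, 15)
print(play(tit_for_tat, always_defect))  # one betrayal, then stalemate: (4, 9)
```

Against itself, Tit-for-Tat locks in cooperation; against a pure defector, it loses only the first round and then matches defection with defection.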

This is how the first tournament turned out; you can find the full file online as JSON but also in other formats (click these links to download them from my GCS — CSV and Parquet).

(Interactive chart of the first tournament’s results — made with Flourish.)

The IPD (Iterated Prisoner’s Dilemma) is obviously too simplistic to accurately model modern politics, but it is interesting that the best-performing strategies all share two common components — the first is called “niceness” by the author:

A decision rule is “nice” if it will not be the first to defect, or if at least it will not be the first to defect before the last few moves

the second is “forgiveness”, described as follows:

Forgiveness of a rule is its propensity to cooperate in the moves after the other player has defected.

In short: assume your adversaries or other “players” / “voters” / “parties” want to cooperate — and if they betray you, forgive them soon enough. Depending on the country, party, and person, this strikes me as a very centrist principle. Macron famously said “there are good ideas on both sides”. Cooperation has some merit.

Here’s what the most famous interactive web game on IPDs says about Tit-for-Tat:

The Golden Rule, reciprocal altruism, tit for tat, or... live and let live. That's why “peace” could emerge in the trenches of World War I: when you're forced to play the same game with the same specific people (not just the same generic “enemy”) over and over again — [this strategy] doesn't just win the battle, it wins the war.

By replicating this at the meta-level, across tournaments, the game ends up concluding:

“Copycat [Tit-for-Tat] inherits the Earth” — crazy stuff! In the rest of the game, it shows one strategy that is even more effective: Copykitten [Tit-for-Two-Tats], an even more forgiving version of Tit-for-Tat.
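Copykitten’s extra forgiveness is a one-line change: retaliate only after two defections in a row. A sketch of both rules side by side (names follow the game; the implementation details are my own):

```python
def tit_for_tat(their_history):
    """Retaliate after any defection."""
    return 'C' if not their_history else their_history[-1]

def tit_for_two_tats(their_history):
    """Retaliate only after two defections in a row — more forgiving,
    so a single (possibly accidental) slip does not start a feud."""
    if their_history[-2:] == ['D', 'D']:
        return 'D'
    return 'C'

# After one defection, Tit-for-Tat strikes back; Copykitten lets it slide:
slip = ['C', 'C', 'D']
print(tit_for_tat(slip))       # 'D'
print(tit_for_two_tats(slip))  # 'C'

# Two defections in a row exhaust its patience:
print(tit_for_two_tats(['C', 'D', 'D']))  # 'D'
```

This extra tolerance is what makes Copykitten robust when moves are noisy — a single misread “betrayal” no longer triggers an endless cycle of mutual retaliation.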

As the game highlights, this all ceases to be true when the game becomes zero-sum — but thankfully reality is not zero-sum!

All in all, I think Keynes’ statement is correct, but parties which remain far from the center and — most importantly — refuse to cooperate and compromise on their goals are most certainly incorrect, in my opinion. Every good voter should have a kernel of “extremism” and a kernel of “centrism” within them, channeling the appropriate one when they feel it’s best.

Centrism as “Status Quo” defense

I have so far seen a few people say centrists are those who “want everything to remain as it is”. They contrast this with conservatives, who — in these people’s view — want to return to a previous moment in history, a state in which they believe things were better.

There are many reasons why this could be incorrect, but let me first assess this claim about conservatives. I do not like conservatism, at all. At its root (even though I distrust people who over-interpret words), conservatism is about “preserving” things as they are, not exactly “returning” to another state of affairs.

Perhaps more confusing, some of the most anti-stagnation, pro-growth, pro-technology, pro-innovation people I know are “Right-Wingers”. They’re not all conservatives; the ones I know just believe welfare and pro-social policies aren’t very efficient at reaching their goals and are not so important. Those are mostly center-right voters though.

Centrism could be seen, in the view I’m discussing, as either:

  • a very mild form of conservatism — accept change but slow it down as much as necessary,
  • a very mild form of progressiveness — accept change but make sure it is not too fast.

Macron was famously elected because he kept hammering home that the country needed to change. He kept talking about “reforms” (to do what?), and blamed the entirety of the French electorate. Here’s an excerpt from the BBC:

In his speech, he reiterated his admiration for the Danish “flexicurity” model, which combines a flexible labour market with generous welfare benefits.

“What is possible is linked to a culture, a people marked by their own history. These Lutheran [Danish] people, who have lived through the transformations of recent years, are not exactly Gauls who are resistant to change,” he said.

Macron loves change. He loves change so much he always changes his mind. He loves change a little bit too much, perhaps. So do centrists want change, or do they oppose it?

There is a quote that illustrates what the “centrists do not want change” people might be thinking. It comes from the book “The Leopard [Il Gattopardo]” by the Italian author Giuseppe Tomasi di Lampedusa. I will cite the Italian quote, then its translation:

“Se vogliamo che tutto rimanga com'è bisogna che tutto cambi.”

“If we want everything to stay as it is, everything has to change.”

— Tancredi

This is a classic trick: make everything shift around, lose people with acrobatics and numerous changes and “reforms”, only to end up in a state that somewhat mirrors the pre-existing conditions.

This is very much not what happens, though. Macron’s reforms have noticeable impacts — not necessarily in a good way. Maybe on a more long-termist view we will see a neutral effect, but this remains unlikely.

The US Social-Liberal Globalists, whom I support (I am a recurring donor), have clearly stated they believe in fairly radical reforms. They just happen to believe in reforms which are not always “left-coded” — though they still side with the Democrats (they even made it official).

My personal take on centrism, and center-left politics, is more charitable. It roughly goes:

Centrism does not preclude you from wanting radical change. You can even believe in socialism and yet be centrist. Centrism acts on the “time axis” — i.e. how quickly you think change should happen.

Centrists are most likely to believe change is good, but that change should be well thought out. Change should last, and help build even more change — after ensuring previous change was beneficial.

Above all, centrists tend to acknowledge change is confusing, and we should help the people who aren’t “anti-progress” but have their own doubts and misconceptions.

Do not mistake me for someone I’m not, though: I still firmly believe in Planck’s Principle. Society changes for the better when the people who are wrong cease to exist.

“Centrism” matters as a transitory state, to secure positive outcomes when the wrong people are still there, when we are not sure who is right and who is wrong, or simply when we want a little bit more evidence before claiming that something is indeed “right”.

Last words

So many things can be said about centrists. I have probably only talked about a third of what I think centrism is about. Centrists are also political pluralists, truth seekers, undecided nobodies, and many more things.

This was only a recollection of things, and as I write these last words I cannot help but feel I have written too much on the topic. I might write a more “interactive” version of this, one that is not constrained by the blog format, to let people play with the different kinds of centrism.

Don’t hesitate to reach me on one of my socials if you want to give me pointers, suggest changes or new topics, or point out typos!

This is the second blog post in a series of posts about my switch from Social-Democracy to Social-Liberalism.

The last post was about specific things I did not believe, to try to dispel misconceptions about why I chose to identify as “liberal”. It briefly touched on several topics — socialism, private property, democracy, meritocracy.

Now comes another article, which plays a big role in my switch: the realization that markets are not inefficient, and that randomness and uncertainty are not always counterproductive.

The roots of anti-market beliefs

There are several attempts to map anti-market sentiments, and one group of people always stands out in my experience. These are the STEM graduates — the engineers, mathematicians, physicists — who argue econometrics and macro-economics are “baby science” or even “fake science”, and that any serious scientist should reject economic research because it is full of unserious grifters.

One such person I’ve found to hold this belief is Rémi Louf. This man has a degree from ENS Ulm, a STEM research school that is arguably more selective than MIT. On top of this, he also happens to have a double degree from Oxford, which isn’t very surprising: once you’ve done well at ENS, Oxford should be a piece of cake.

Not to say Rémi is wrong here — I barely know him. He’s very smart, is a leading contributor to critical projects surrounding LLMs and Bayesian computation, and deserves recognition.

I can’t blame him — I used to be like him, in fact. And this post explains what made me understand this view was wrong.

“God does not play dice”

This is a sentence Einstein wrote to Max Born. The original sentence goes (in German):

Jedenfalls bin ich überzeugt, daß der nicht würfelt

— “I, at any rate, am convinced that He does not play dice”

This was his opinion on the famous Copenhagen Interpretation of Quantum Mechanics, championed by Niels Bohr among others. You have to put this in the context of 20th-century theoretical physics. The most powerful theory then was built upon Maxwell’s equations, and all of the theory’s predictions could be derived from axioms such as

$$ \begin{cases} \nabla\cdot\mathbf{E} = \dfrac{\rho}{\varepsilon_0}&\\ \nabla\cdot\mathbf{B} = 0& \end{cases}$$

You have to understand that these quantities, \(\mathbf{E},\mathbf{B}\), are profoundly deterministic. They describe the vector fields — electric and magnetic — which tell you precisely how any electric charge will be affected at any point of space. This is 100% deterministic — zero uncertainty here.

Newtonian physics is similar — you have position \(\mathbf{x}\), velocity \(\dot{\mathbf{x}}\), acceleration \(\ddot{\mathbf{x}}\) — and so forth. Any manipulation of these objects is a deterministic function of the input.
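Determinism here just means: same inputs, same outputs, forever. Under constant acceleration, for instance, position is a closed-form function of time (a minimal sketch):

```python
def position(x0, v0, a, t):
    """Newtonian kinematics under constant acceleration:
    x(t) = x0 + v0*t + a*t**2 / 2 — a pure, deterministic function."""
    return x0 + v0 * t + 0.5 * a * t ** 2

# Drop a ball (x0 = 0, v0 = 0) under gravity a = -9.81 m/s^2 for 2 seconds:
print(position(0.0, 0.0, -9.81, 2.0))  # -19.62 metres, every single time

# Re-running the "experiment" can never surprise you:
assert position(0.0, 0.0, -9.81, 2.0) == position(0.0, 0.0, -9.81, 2.0)
```

No density functions, no measurement problem: the state at time \(t\) is fully determined by the initial conditions.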

The Danish kids were claiming no such thing though — sadly for our traditionalists. They argued the correct unit for modelling was \(\Psi\) — a complex-valued function — which not only made no direct sense, but whose only interpretation was through \(\Psi^\dagger\Psi = \lvert\Psi\rvert^2\), a so-called density function.

Roughly, this means that a system does not act on deterministic quantities, but merely changes the probability that the system falls into a given state. This was the death of determinism, all in favor of efficiency and predictive power — you could call it “the first instance of neoliberalism in theoretical physics”.
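For a toy discrete state, the \(\lvert\Psi\rvert^2\) reading looks like this (illustrative amplitudes of my own choosing, not any physical system):

```python
# A toy wavefunction over three basis states, as complex amplitudes:
psi = [1 + 1j, 1 - 1j, 2 + 0j]

# Psi^dagger Psi gives unnormalized weights; normalizing yields probabilities.
weights = [(a * a.conjugate()).real for a in psi]  # [2.0, 2.0, 4.0]
total = sum(weights)
probs = [w / total for w in weights]               # [0.25, 0.25, 0.5]

print(probs)
# The theory predicts only these probabilities — not which state
# a single measurement will actually find.
```

The amplitudes themselves are not observable; only the probabilities they induce are, which is precisely what unsettled the determinists.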

Cellular networks are literally neoliberalism

Another huge blow to the deterministic tradition in science is none other than the very reason you’re able to read this article. Claude Shannon’s general theory of communication rests upon one key revolutionary idea: we have to accept that nothing is 100% sure, and some data will be lost.

Shannon’s theory, which he built at AT&T’s Bell Labs (horror — a private company 😱), can be summed up as follows:

Suppose we accept that transmission is noisy, and that bits get corrupted with some non-zero probability. Can we design systems which are preconditioned on the idea that real-life channels are probabilistic in nature — and design them so that the probability of error goes to zero as we scale up some design component?

This framework — which assumes you cannot know exactly what goes on in the channel, and only aims to quantify its statistical properties — turns out to be much, much better than deterministic theories.
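A crude way to feel this is a repetition code (a toy of my own choosing: it drives the error down by burning transmission rate, whereas Shannon’s actual theorem shows you can get vanishing error at a fixed rate below channel capacity). Still, the probabilistic design logic is visible:

```python
from math import comb

def majority_error(n, p):
    """Probability that a majority vote over n repeated sends of one bit
    decodes wrongly, when each copy flips independently with probability p.
    Assumes n is odd so there are no ties."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Channel flips 10% of bits; add redundancy and watch the error shrink:
for n in (1, 3, 5, 7):
    print(n, majority_error(n, 0.1))
```

The channel stays random, yet the system built on top of it becomes as reliable as we like — exactly the mindset Shannon formalized.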

This is perhaps another major success of “scientific neoliberalism” — it reinforces the idea that accepting uncertainty and “irrationality”, not having everything under control, can actually lead to better outcomes.

The way Shannon designed his theory is exactly how I think about markets. Markets are huge Bayesian networks which constantly update their marginal likelihoods in reaction to real-time events. A market is a pure stochastic machine. And it works better than the alternative.
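The “constant updating” picture is just Bayes’ rule applied over and over. A toy sketch with made-up numbers: a market’s belief that a product launch succeeds, revised as evidence signals come in.

```python
def bayes_update(prior, p_signal_if_true, p_signal_if_false):
    """Posterior probability after observing a signal, via Bayes' rule."""
    numerator = prior * p_signal_if_true
    return numerator / (numerator + (1 - prior) * p_signal_if_false)

belief = 0.5  # even odds that the launch succeeds

# A strong positive earnings report: likely if the launch is on track (0.9),
# unlikely otherwise (0.1).
belief = bayes_update(belief, 0.9, 0.1)
print(belief)  # 0.9

# A second, weaker positive signal (0.6 vs 0.4) nudges it further up:
belief = bayes_update(belief, 0.6, 0.4)
print(belief)
```

No single participant needs the full picture; each new signal mechanically shifts the aggregate belief, which is roughly what a price does.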

Even the soviets agree markets are better !

One of the funniest moments in scientific history is arguably Kantorovich’s major contributions to Optimal Transport Theory. To give a bit of context: Optimal Transport is what the Soviets used for economic planning. It is a series of mathematical tools for distributing goods from a source domain \(\mathcal{X}\) (say, your factories) to a target domain \(\mathcal{Y}\) — the consumers. The theory assumes moving one point from source position \(x\) to target position \(y\) incurs a cost \(c(x,y)\), a real number.

The so-called “Kantorovich formulation” goes as follows: assume you have fixed marginals \(\mu\,,\,\nu\) which you cannot change. These are the locations of your factories across the country (you can’t change that), and the houses in which people live (you cannot change this either) or their household wealth. What you can play with is what Kantorovich calls a “coupling” \(\gamma\in\Gamma(\mathcal{X}, \mathcal{Y})\), which is just a joint density, or a “probabilistic mapping”, from source to target.

(L. Kantorovich, “On the translocation of masses”, 1942 — work later rewarded with the 1975 Nobel Memorial Prize in Economics, for formulating optimal transport as a convex linear program.)

Anyway, the Kantorovich Formulation itself is nothing groundbreaking. It says the objective is to minimize the overall cost of moving all the sources to the targets, i.e. to minimize the integral

$$\inf \left\{ \int_{\mathcal{X}\times\mathcal{Y}} c(x, y) \, \mathrm{d} \gamma (x, y) \right\}$$

over the set of all potential couplings \(\gamma\in\Gamma(\mathcal{X}, \mathcal{Y}) \). What was groundbreaking, though, is the dual formulation he proved must hold for the solution to be optimal. He proved it is

$$\sup \left\{ \int_\mathcal{X} \varphi (x) \, \mathrm{d} \mu (x) + \int_\mathcal{Y} \psi (y) \, \mathrm{d} \nu (y)\right\}$$

Where the sole condition on the functions is \(\varphi (x) + \psi (y) \leq c(x, y)\).

Others will explain this better than I do, but above \(\varphi (x)\) can be read as pricing the supply side and \(\psi (y)\) the demand side, and the fact that \(\varphi (x) + \psi (y) = c(x, y)\) wherever the optimal coupling actually moves mass suggests the optimal system must separate demand management from supply management. It essentially implies there shouldn’t be a single actor doing both functions, which means planning is strictly inferior at solving allocation problems.
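For a tiny discrete instance you can check the primal/dual match by hand (toy numbers of my own choosing; with uniform marginals on two points, the optimal coupling is one of the two possible matchings):

```python
from itertools import permutations

# Two factories at x = 0, 1 and two consumers at y = 0, 2,
# each holding probability mass 1/2; cost = distance.
xs, ys, mass = [0, 1], [0, 2], 0.5

def cost(x, y):
    return abs(x - y)

# Primal: with uniform marginals, enumerate the two possible matchings.
primal = min(sum(mass * cost(x, y) for x, y in zip(xs, perm))
             for perm in permutations(ys))

# Dual: potentials phi on sources, psi on targets,
# feasible iff phi(x) + psi(y) <= c(x, y) everywhere.
phi = {0: 0.0, 1: 1.0}
psi = {0: 0.0, 2: 0.0}
assert all(phi[x] + psi[y] <= cost(x, y) for x in xs for y in ys)
dual = sum(mass * phi[x] for x in xs) + sum(mass * psi[y] for y in ys)

print(primal, dual)  # 0.5 0.5 — the dual value meets the primal optimum
```

Here the feasible potentials I picked achieve exactly the primal optimum (0.5), illustrating the strong duality Kantorovich proved; for any feasible potentials, weak duality guarantees the dual value can never exceed the primal one.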

This, which is probably one of the strongest theoretical pieces of evidence in favor of markets, was proven by a literal soviet mathematician trying to make central planning more efficient. What a splendid victory of neoliberalism once again.

What about the future? Will neoliberalism stop?

Well, I have bad news. Have you heard about the Artificial Intelligence revolution? All these humans doing worse than machines at increasingly many tasks? Well, all the algorithms these machines use are inherently probabilistic. None of them assumes determinism. In fact, the entire theory is built around randomness.

Arguably the entire field of Big Data and Machine Learning is about leveraging the Law of Large Numbers. It is neoliberalism on steroids, I’m afraid. Even more so because private companies own most of the compute and concentrate most of the skills within the industry.

There has never been a time in history where believing stochastic models are inferior could be more wrong. And perhaps economics as a science was visionary in embracing this early.

The teachings of “scientific neoliberalism”

The people who argue that markets are random and uncertain, and that because they are, it follows they are less efficient at solving allocation problems — these people are most certainly fools. I know, because I was one.

I wanted to believe that deterministic rules do better than random systems. This is wrong. I wanted to believe clear-cut regulation — banning stuff, price controls, nationalization, etc. — would obviously yield better outcomes.

But it couldn’t be more wrong. And this article only scratches the surface. The more you learn, the more you realize it is pure stupidity.

After a very nice discussion with Erik Engheim, who is a proponent of Nordic-style Social-Democracy, I decided that I would write down a couple of things about my politics, but also about my history and self.

As some of you may know, I used to identify as a Social-Democrat.

This was from 2014 to 2018: I was 17 when it started, and I changed my mind around my 20th or 21st birthday. First and foremost, let me clarify: calling yourself “liberal”, or even “social-liberal”, is far from politically neutral in France.

Most people from the center to the center-left — hell, even right-wingers — raise an eyebrow when you tell them you think liberalism is cool. Even in my own current party, very good people I appreciate freeze a little when you tell them you are social-liberal.

In spite of this, I still like to call myself this, for a few reasons:

  • I like honesty — a lot — and I like stating things clearly
  • Markets & Liberalism are mostly unfairly criticized in 🇫🇷 – and I appreciate fairness and accuracy
  • I like empiricism and data-driven design, and this is an easy way of filtering out people who do not wish to engage with either your ideas or the facts on the ground
  • I genuinely believe France needs to be more liberal, not less – even if my ideas are often close to those of Nordic Model advocates

This series of posts will be dedicated to explaining why I hold some of those beliefs. This first post is dedicated to what I do not believe. This might help some make sense of what I share and what I do not share with others.

Most subtitles are phrased in a negative way: “X is not Y”. This doesn’t mean I believe the opposite (“X is Y”) — it is just stylistic repetition; each subtitle means that I do not think “X is Y”.

Socialism isn’t evil – it is just unconvincing

When I read about some less well-known strands of socialism, such as Fabian Socialism, Liberal Socialism, or several versions of Market Socialism, I tend to agree with a fair bit of their ideas.

I have found very inspiring and smart people defending these men and women, and even the staunchest anti-communist should ask themselves whether the ideologies that gave us the London School of Economics, Lee Kuan Yew’s Singaporean Miracle, or the policies championed by New Labour are something to abhor.

The main reason Socialism feels so “neutral” or “uninteresting” to me is probably linked to my ambivalence towards what philosophers call “Ideal Theory”. To quote a few lines from the article I linked:

The idea that a vision of an ideal society can serve as a moral and strategic star to steer by is both intuitive and appealing. But it turns out to be wrong. This sort of political ideal actually can’t help us find our way through the thicket of real-world politics into the clearing of justice.

The thesis is that not having a “Grand Plan” to stick to is actually what leads to “utopia”, and that trying to reproduce mental models of what ought to be often backfires.

Inequality isn’t a “non-issue”

Inequality is very much important, and even the staunchest free-marketeers will concede that the economic fabric of human societies breaks down when the Gini coefficient is high enough. This is precisely the thought process behind Samuel Hammond’s “free-market welfare state”. Sam is not very progressive, nor is he social-liberal. He just has half a brain.

Even Hayekians will agree information asymmetry is an issue, though they backtrack when presented with any concrete intervention to equalize things. Some will probably invoke Coase’s Theorem to argue inequalities do not preclude achieving the best outcomes — but this is so idealistic it might very well be called “its own kind of Socialism” — if one identifies socialism with a high dose of idealism.

As Keynes himself said,

The long run is a misleading guide to current affairs.

In the long run, we are all dead.

In light of what was said about Ideal theory above, let me state two of my beliefs, which I think will make everything clearer:

When — and if — I try to envision the ideal society, far into the distant future, I tend to think of a society that is much more egalitarian than the one we currently have

However, I also strongly believe the following

It is impossible to foretell the pace at which humanity will progress towards such egalitarian societies, and it is highly unrealistic to envision and enforce a steady, always-decreasing trend in inequality — hiccups and fluctuations are inevitable and perhaps even desirable

In other words: let's keep things incremental, and focus on short to medium term political goals. It is fine to see it as a grand quest to prove "socialism works" over time, it is fine to see it as "pragmatism" — please just roll up your sleeves and do the work.

People and voters aren’t stupid

Perhaps what might seem odd to readers is the following:

It is because I believe ordinary people should have more control over their lives and get to weigh more on policy and economic outcomes that I turned away from French Social-Democracy

French "Social-Democrats" are a weird bunch. They're extremely competent — most top positions require a degree from one of the most selective schools on Earth for public management, the French École nationale d'administration — and while the French Parti Socialiste used to have broad popular support, the French 5th Republic is very much a "choose your flavor of technocrat every 5 years" type of system.

This is very much wrong in my opinion. Not only because French technocrats are not exactly the best at getting things done, but because ordinary citizens — be they investors, artists, engineers, activists, educators, researchers — have proven to be just as important, if not more so. Statesmen should remain humble. They are supposed to empower people, to help them make sense of trade-offs and tools — not to suppress society's voice.

Individual Freedom isn’t incompatible with solidarity

If you listen to lots of self-proclaimed “progressives” and “socialists” in France, you will get the impression there is no way to reconcile individual freedom with solidarity and social justice: most policies should be designed within this framework — restraining individual freedom is simply a necessary component of increasing equality and welfare.

To this, I will simply quote Keynes — whose view should serve as the basis of most socialist ideologies in my opinion — who famously said:

The political problem of mankind is to combine three things:

Economic efficiency, Social justice, and individual liberty.

Let me be clear: individual liberty is non-negotiable. If your design makes it impossible to push for the other two alongside it, you are essentially justifying inequality in my eyes, and you will lose — and must lose — because individual liberty is vital.

It is up to the left to find a way to advocate for both. I will encourage them to do so when possible, but I will oppose and vote against any initiative that tries to promote authoritarianism as a tool to reach equality.

The “Revolution” — Mob violence — isn’t democratic

A lot of people on the left in France, perhaps as a consequence of the French Revolution of 1789, seem to believe riots, revolutions, popular uprisings and other kinds of mob violence are profoundly progressive and democratic tools — legitimate ways for the people to express themselves and push for societal progress.

Why did the French Revolution turn into a cultural glorification of popular uprisings and not into a deep-seated respect for Liberal Radicalism ? I do not know. Yet here we are, and my ideology reflects this in part.

My bias runs the opposite way: I was raised by an artist who equated such things with fascism, and who believed individuals were unique and should never try to belong to an indiscriminate crowd.

You can find a long line of artists — primarily with anarchist-like leanings — who were heavily criticized by Marxists and revolutionaries for not appealing to the masses and trying to sound unique and avant-garde instead. Karlheinz Stockhausen is perhaps one of them, and some of his most famous pupils include Can's Holger Czukay and Irmin Schmidt. They cannot reasonably be qualified as right-wing — after all, one of the band members suggested the group's name CAN was an acronym for "Communism, Anarchism, Nihilism".

Some of my father’s takes include:

  • Democracy is still too authoritarian — it is a dictatorship of the majority over minorities
  • Money and art should not mix — it promotes cheap tricks and corrupts artists
  • Popular music and mass produced art encourages herd mentality and opens the door to authoritarianism

I am still not sure what his politics were like — after all, he never cared enough to vote. He lived 35 years under Tito's dictatorship, and believed French politics were full of dishonest liars who did not deserve his attention.

Successful people do not “deserve” what they have

Perhaps what is most crucial in my drift away from French Social-Democracy is the idea that meritocracy does not exist — and that trying to will it into existence is not useful. Many French social democrats I know tend to oppose markets and liberalism precisely because they think they are not meritocratic.

But these same ideas lead them to borderline reactionary proposals when discussing what welfare recipients “deserve” or “do not deserve”. I believe this is a fundamentally bankrupt framework, and we should not care whether someone is “deserving” — this is arbitrary and mostly does not lead to good outcomes.

Overall I dislike Social-Democracy because it feels too rules-based, too rigid, lacks flexibility and does nothing to adapt to change.

Perhaps one of the most powerful quotes against meritocracy I've encountered is Milton Friedman's:

“Deserves” is an impossible thing to decide.

No one deserves anything. Thank god no one gets what they deserve.

As with many of Friedman's quotes, you can twist this any way you like. Was this a conservative rambling against welfare ? Probably not: Friedman notably believed in a universal income and stated several times that poverty should be eradicated.

I just think it is simply him implying the so-called “Friedman Doctrine” yields more progressive outcomes than arbitrary allocation of resources — which many on the left struggle to admit.

Property rights are not absolute

I very much subscribe to the belief property is the result of a collective decision making process. We agree to give exclusive use of a good or resource because we — as a collective — believe that this allocation will result in outcomes which are better for everyone.

Obviously we tend to give these principles extra protection, because they serve as a fundamental basis of the economic system, and having them change every 5 years is a recipe for disaster.

I am of the belief that it is very hard to find a system which beats welfare capitalism, but I remain open to the idea that the current system is not the optimal way of allocating Earth's resources. A book that reflects my position is the recent publication "Radical Markets: Uprooting Capitalism and Democracy for a Just Society". Most people on the left would oppose the book's ideas, but I do believe they're wrong in this regard. I am not certain the book's ideas are correct — in fact there are strong arguments against them.

Change is neither dangerous nor unreasonable

As someone who has so far voted for centrist candidates, I know the arguably fair joke about centrists: that they refuse anything bold and only want tiny adjustments. I do not believe this describes my ideology — in fact I believe the opposite.

I truly believe change is good, and I despise conservatism and inertia so much that I see the world as an always-turning wheel: nothing is here to stay, everything changes, no-one is irreplaceable, so-on so forth.

What is striking in the french political climate is that both the right and the left have — in some sense — profoundly conservative messaging.

The right believes we should return to more traditional values and family-centric companies and economics. They long for the time when France was a powerful colonial power. Some want to reinstate the gold standard.

The left believes we should return to post-war welfare and stay forever in that frozen period in time when price controls and economic planning were common sense. They think companies going bankrupt and people losing their jobs is proof the system has failed.

All of this is profoundly reactionary to me.

The only candidate who dared to say some things needed to change was the centrist one — although he has since made so many wrong turns that I gave up on him a few years ago.

Last words

There are many things I disagree with — this isn’t meant to be exhaustive, but I hope this makes it easier to guess what my opinion will be on a given issue.

Many would think someone who appreciates policy wonkery and liberalism is very confident that socialism is wrong, property rights are absolute, meritocracy is good, or ordinary people are stupid. This isn't quite me.

Perhaps the big flaw of this article is that you might now ask — but what do I believe ? I will end up answering this question in the next articles. Topics I want to cover for now are:

  • What I believe in — to mirror this post
  • My past and more emotional aspects — what made me feel I was wrong on an emotional level
  • Why I stopped believing markets are inefficient — accepting randomness and refusing determinism

Hope this made sense and wasn't too boring. If you have any suggestions for upcoming blog posts, don't hesitate to send a DM to @arno_shae on Twitter or on Mastodon.

Alright, this is a bit of an unusual situation, as I rarely blog about very down-to-earth everyday technical tweaks, but my fonts looked so ugly on Windows 11 that I had to look into it.

This post is essentially mirroring this StackOverflow post.

Step 1: fix your computer

The first step is to change the way the entire OS renders fonts.

The quickest way is to press Win+R

(I made this using this very nice website, by the way). You should see a small window pop up on the bottom-left of your screen

You should type exactly what’s in my prompt: sysdm.cpl

This will open a window, click on advanced

Then (after clicking on the Performance button) you just tick a few boxes, most importantly the “Smooth edges” option.

Last step: fixing your Browser

If you use Google Chrome (or Chromium) like me, you might’ve noticed the fonts looking like shit. That is not you hallucinating (at least not if it’s still close to 2023) !

Chrome has a set of experimental features it can toggle on or off, which you can find at chrome://flags/ if you're using Chrome.

This is what it looks like:

The only one you’ve got to disable is the “Accelerated 2D Canvas” which you see on top here. You might need to Ctrl+F to find it on the page.

I recently came across the problem of computing posteriors and priors according to basic sum-product rules while having uncertainty on them. Suppose you have four probabilities \(p,q,r,k\) and all of them have some level of uncertainty that's framed as some sort of \(\text{LogitNormal}(\mu,\sigma)\).

As each of these variables is the output of a different regression, it seems only fair to give each of them its own parameters \((\mu_s, \sigma_s)\) with \( s\in\{p,q,r,k\}\).

Another twist is that I am given an Odds-Ratio as a way to estimate the uncertainty of \(p,q\) through \(p/q\).

In my case I found it simpler to express all of them as Gaussians \(Z_s\) compressed through the logistic sigmoid \(\text{sigmoid}(z) = 1/(1+\exp(-z))\):

$$ p/q = \dfrac{\text{sigmoid}(Z_p)}{\text{sigmoid}(Z_q)} = \dfrac{1+\exp(-Z_q)}{1+\exp(-Z_p)} = \exp\left(Z_\text{OR}\right) $$
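A quick numerical sanity check of that ratio identity, writing the logistic function explicitly (a small sketch with arbitrary sample values, not part of the derivation):

```python
import numpy as np

def sigmoid(z):
    # logistic function: squashes a Gaussian Z_s into a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

Z_p, Z_q = 0.3, -1.2  # arbitrary sample values for the latent Gaussians

lhs = sigmoid(Z_p) / sigmoid(Z_q)              # p / q
rhs = (1 + np.exp(-Z_q)) / (1 + np.exp(-Z_p))  # the rewritten ratio
assert np.isclose(lhs, rhs)
```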

I also know that \(k = rq + (1-r)p \) because \(p,q\) are conditional distributions.

This seems very abstract, but just know I have specific values for some of these quantities: I know that \(\mu_\text{OR} = 5.5\) and I have values for the confidence interval \(\text{OR}\in (4.1, 7.28)\). Let's estimate the mean \(\mu_\star\) and standard deviation \(\sigma_\star\) of the variable \(Z_\text{OR}\).

From Normal CI's to LogNormal CI's

As it is laid out in this stack exchange post, if your variable \(X\sim\text{Normal}(\mu, \sigma)\) and \(Y = \exp X\), a \(95\%\) CI for \(Y\) is obtained by solving

$$ \mathbb{P}(a\leq Y \leq b) \triangleq \Phi\left(\dfrac{\log(b)-\mu}{\sigma}\right) - \Phi\left(\dfrac{\log(a)-\mu}{\sigma}\right) = 0.95 $$

With the additional knowledge that a \(95\%\) normal CI corresponds to \(z = 1.96\) (since \(2\Phi(1.96) - 1 \approx 0.95\)), I am left solving the equations

$$ 4.1 = \exp( \mu\,-\,1.96\sigma)\qquad\text{and}\qquad 7.28 = \exp( \mu + 1.96\sigma)$$

Hence $$\mu_\star = \dfrac{\log(a) + \log(b)}{2}\qquad\text{and}\qquad\sigma_\star = \dfrac{\log(b) - \log(a)}{2\cdot 1.96}$$

Which yields \(\mu_\star = 0.74269 \) and \(\sigma_\star = 0.0609\)

Getting the distribution of every variable

Now we can guess the distribution of \(p,q\) because

$$ 1+\exp(-Z_q) = \exp\left(Z_\text{OR}\right) + \exp\left(Z_\text{OR}\right)\exp(-Z_p) $$

So I'm back to estimating the distribution of sums/products of a bunch of \(\text{LogNormal}\) RVs. According to the Wikipedia page:

Sum of Log-Normals

If \(X,Y\) are log-normal then a reasonable approximation of \(S \triangleq X+Y\) is given by a log-normal whose parameters match the mean and variance of the true sum (the Fenton-Wilkinson approximation):

$$ \sigma_S^2 = \log\left[\dfrac{\exp\left(2\mu_X + \sigma_X^2\right)(\exp(\sigma_X^2)-1) + \exp\left(2\mu_Y + \sigma_Y^2\right)(\exp(\sigma_Y^2)-1)}{\left[\exp\left(\mu_X + \sigma_X^2/2\right) + \exp\left(\mu_Y + \sigma_Y^2/2\right)\right]^2} + 1\right] $$

and \(\mu_S = \log\left[\exp\left(\mu_X + \sigma_X^2/2\right) + \exp\left(\mu_Y + \sigma_Y^2/2\right)\right] - \sigma_S^2/2\)

Product of Log-Normals

When you have to compute \( P = XY\) it's much easier: the product is exactly \(\text{LogNormal}\left(\mu_X + \mu_Y, \sigma_X^2 + \sigma_Y^2\right)\), where the second parameter is the variance \(\sigma_P^2\).
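Here is a minimal, self-contained sketch of these two rules (the helper names mirror the ones used in the code later in the post, but the signatures are my own guess, and the Fenton-Wilkinson approximation matches the first two moments of the sum by construction):

```python
import numpy as np

def sum_lognormals(params_x, params_y):
    # Fenton-Wilkinson moment matching: X + Y is approximated by a
    # log-normal with the same mean and variance as the true sum.
    (mu_x, s_x), (mu_y, s_y) = params_x, params_y
    mean = np.exp(mu_x + s_x**2 / 2) + np.exp(mu_y + s_y**2 / 2)
    var = (np.exp(2 * mu_x + s_x**2) * (np.exp(s_x**2) - 1)
           + np.exp(2 * mu_y + s_y**2) * (np.exp(s_y**2) - 1))
    sigma2 = np.log(var / mean**2 + 1)
    return np.log(mean) - sigma2 / 2, np.sqrt(sigma2)

def prod_lognormals(params_x, params_y):
    # Exact: log(XY) = log X + log Y is a sum of independent Gaussians.
    (mu_x, s_x), (mu_y, s_y) = params_x, params_y
    return mu_x + mu_y, np.sqrt(s_x**2 + s_y**2)

# Sanity check: the approximated sum has exactly the right mean.
mu_s, sigma_s = sum_lognormals((-1.0, 0.3), (-2.0, 0.4))
true_mean = np.exp(-1.0 + 0.3**2 / 2) + np.exp(-2.0 + 0.4**2 / 2)
assert np.isclose(np.exp(mu_s + sigma_s**2 / 2), true_mean)
```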

Wrapping everything up

It is conceptually easy to solve for our parameters, but doing the computations requires layers upon layers of tedious arithmetic.

Thankfully I have implemented this ugly beast in Python. All it required was a symbolic expression simplifier and a well-executed implementation of the nonlinear optimization algorithm BFGS.

Here's a piece of the code:

    mu_q, mu_outcome, mu_condition, mu_or = sp.symbols(
        'mu_q mu_outcome mu_condition mu_or'
    )
    sigma_q, sigma_outcome, sigma_condition, sigma_or = sp.symbols(
        'sigma_q sigma_outcome sigma_condition sigma_or', positive=True
    )
    params_q = (mu_q, sigma_q)
    params_outcome = (mu_outcome, sigma_outcome)
    params_condition = (mu_condition, sigma_condition)
    params_or = (mu_or, sigma_or)

    # sum_lognormals / prod_lognormals (defined elsewhere) return the
    # (mu, sigma) parameters of a sum / product of log-normal variables
    mu_lhs, sigma_lhs = sum_lognormals(
        params_outcome, prod_lognormals(params_q, params_condition),
    )
    mu_rhs, sigma_rhs = sum_lognormals(
        params_q, prod_lognormals(params_or, params_condition),
    )

    return sp.simplify(mu_lhs - mu_rhs), sp.simplify(sigma_lhs - sigma_rhs)
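For intuition, here is a tiny, self-contained variant of that optimization step: recovering the \((\mu,\sigma)\) of a log-normal from a \(95\%\) CI by minimizing squared residuals with SciPy's BFGS. The function and variable names are illustrative, not the actual implementation, and the closed form from earlier serves as a check:

```python
import numpy as np
from scipy.optimize import minimize

def ci_residuals(params, a, b, z=1.96):
    # squared residuals of log(a) = mu - z*sigma and log(b) = mu + z*sigma
    mu, sigma = params
    return ((np.log(a) - (mu - z * sigma))**2
            + (np.log(b) - (mu + z * sigma))**2)

res = minimize(ci_residuals, x0=[0.0, 1.0], args=(0.01, 0.02), method='BFGS')
mu, sigma = res.x

# agrees with the closed form mu = (log a + log b) / 2
assert np.isclose(mu, (np.log(0.01) + np.log(0.02)) / 2, atol=1e-4)
```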

The results !

If we truncate to 4 decimal places, our trusty algorithm finds that if

  • The condition \( k = p(1-r) + rq \) holds,
  • The condition \( p = \text{OR}\cdot q\) holds for some lognormal odds-ratio \(\text{OR}\)
  • Each \(k,p,q,r\) are lognormals

Then the values

  • \( \text{OR}\in (4.1, 7.28)\)
  • \(k\in (0.01, 0.02) \)
  • \(r\in (0.004, 0.013)\)

imply that

  • \(\mu_k = -4.2585,\quad\sigma_k = 0.21238 \)
  • \(\mu_r = -4.9321,\quad \sigma_r = 0.2769\)
  • \(\mu_q = -4.3583,\quad \sigma_q = 0.4212\)
  • finally \(\mu_p = \mu_q + \mu_\star,\quad \sigma_p = \sqrt{\sigma_q^2 + \sigma_\star^2}\)

One last twist: let's not forget we didn't really have the expression \(\exp(Z_p)/\exp(Z_q)\), but rather the expression \(\dfrac{1+\exp(-Z_p)}{1+\exp(-Z_q)}\)

This is easily remedied according to the so-called three-parameter log-normal distribution reparametrization:

  • \( \mu_p \gets (-\mu_p\;-\;1)\;,\quad \sigma_p \gets \sigma_p\)
  • \( \mu_q \gets (-\mu_q\;-\;1)\;,\quad \sigma_q \gets \sigma_q\)

Let's summarize

| Variable name | Log-Normal mean \(\mu\) | Log-Normal std \(\sigma\) |
|---|---|---|
| \(r\in (0.004, 0.013) \) | \(\mu_r = -4.9321\) | \(\sigma_r = 0.2769\) |
| \(k \in (0.01, 0.02) \) | \(\mu_k = -4.2585\) | \(\sigma_k = 0.21238\) |
| \(q = ?\) | \(\mu_q = 3.3583\) | \(\sigma_q = 0.4212\) |
| \(p = \text{OR}\cdot q\) | \(\mu_p = 2.61561\) | \(\sigma_p = 0.42558\) |

As some of you may know, I'm autistic. Nothing major, just a few things I do differently on a daily basis, and a few abilities I have others don't and vice-versa.

My therapist sent me this Nature Comms article about gender identity and autism two months ago, and I was mildly annoyed that every quantity was reported as an Odds-Ratio and not just a probability \(p\in (0,1)\).

Some results are harder to parse this way, such as:

Transgender and gender-diverse individuals had higher rates of autism diagnosis compared to cisgender males (OR = 4.21, 95% CI = 3.85–4.60, p value \( < 2 \times 10^{-16}\)), cisgender females (OR = 6.80, 95% CI = 6.22–7.42, p value \( < 2 \times 10^{-16}\)), and cisgender individuals altogether (i.e., cisgender males and cisgender females combined) (OR = 5.53, 95% CI = 5.06–6.04, p value \( < 2 \times 10^{-16}\)).

So how do we translate such a thing as \(\text{OR} = 4.21\) ? Let's find out.

Crunching the numbers

To interpret this statement

Transgender and gender-diverse individuals had higher rates of autism diagnosis compared to cisgender males (OR = 4.21, 95% CI = 3.85–4.60, p value \( < 2 \times 10^{-16}\)), cisgender females (OR = 6.80, 95% CI = 6.22–7.42, p value \( < 2 \times 10^{-16}\)), and cisgender individuals altogether (i.e., cisgender males and cisgender females combined) (OR = 5.53, 95% CI = 5.06–6.04, p value \( < 2 \times 10^{-16}\)).

we should look at the methodology described in the second figure's legend: we should interpret

$$\text{OR} = p/q = \dfrac{\mathbb{P}(\text{Autism}\mid \text{GD}) }{\mathbb{P}(\text{Autism}\mid \lnot\text{GD})} $$

where \(\text{GD}\) corresponds to being gender-diverse or not. Here \(\lnot \text{GD}\) should be read as “not \(\text{GD}\)”, it is the logical negation: it means you're cisgender in short.

Further, in the introduction, they recall that

Approximately 1–2% of the general population is estimated to be autistic based on large-scale prevalence and surveillance studies

As well as

Currently, 0.4–1.3% of the general population is estimated to be transgender and gender-diverse, although the numbers vary considerably based on how the terms are defined

which boils down to \(\mathbb{P}(\text{Autism}) \in (0.01, 0.02)\) and \(\mathbb{P}(\text{GD}) \in (0.004, 0.013)\). We'll denote these \(p_\text{A}\) and \(p_\text{GD}\) for short.

We obviously have that \(p_\text{GD}p + (1-p_\text{GD})q = p_\text{A}\).

That yields us (if we recall \( p = \text{OR}\cdot q \) ) the values

$$ q \triangleq \mathbb{P}(\text{Autism}\mid \lnot\text{GD}) = \dfrac{p_\text{A}}{1+p_\text{GD}(\text{OR}-1)} \approx \dfrac{0.015}{1+0.0085\cdot (4.2-1)} \approx 0.0146 $$ if we just take the confidence intervals' midpoints.

This means, not accounting for uncertainty, that a cisgender person has a \(1.46\%\) chance of being autistic, while a gender-diverse person has a much higher (4.2 times higher) chance, at \(6.13\%\).
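In code, this back-of-the-envelope computation is just a few lines (using the interval midpoints as above):

```python
# midpoints of the reported intervals: P(Autism), P(GD), and the odds ratio
p_A, p_GD, OR = 0.015, 0.0085, 4.2

q = p_A / (1 + p_GD * (OR - 1))  # P(Autism | cisgender)
p = OR * q                       # P(Autism | gender-diverse)
assert round(q, 4) == 0.0146 and round(p, 4) == 0.0613
```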

Bayes' Rule and the egg question

Am I transgender or not ?

Is a question I ask myself often, and others too. But right now, my question is a tiny bit more specific. It is rather:

Am I transgender, given that I already know I'm autistic ?

Answering this question boils down to estimation of \( p_\text{egg} \triangleq\mathbb{P}(\text{GD} \mid \text{Autism}) \).

This is where our dear friend reverend Thomas Bayes can help us !

According to his findings, this is simply

$$p_\text{egg} = \frac{ \mathbb{P}(\text{Autism}\mid \text{GD}) \mathbb{P}(\text{GD})}{\mathbb{P}(\text{Autism})} = p\cdot p_\text{GD} / p_\text{A} \approx 0.03479 $$

Turns out, as an autistic person, I have a \(3.48\%\) chance of being transgender !

This is much higher than the upper bound of \(1.3\%\) for the average joe.
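The same point estimate, as a quick check (again using interval midpoints; this is only the point estimate, not the full posterior):

```python
p_A, p_GD, OR = 0.015, 0.0085, 4.2

q = p_A / (1 + p_GD * (OR - 1))   # P(Autism | not GD)
p = OR * q                        # P(Autism | GD)
p_egg = p * p_GD / p_A            # Bayes' rule: P(GD | Autism)
assert abs(p_egg - 0.0348) < 1e-3
```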

Confidence intervals

This is just to have a point prediction, but how do we compute the confidence intervals ?

Turns out it's not that easy, and this is the main focus of an entire other blogpost.

After one night of coding and pretty intricate hacks, I am proud to report that I can estimate any of the paper's probabilities.

As an example, here are a few entries:

| Control group | Dataset | \(\mu\) | \(\sigma\) | Probability | Type | \(95\%\) Confidence Interval |
|---|---|---|---|---|---|---|
| cisgender-individuals-altogether | MU | \(-2.451876\) | \(0.522116\) | \(0.098709\) | \(\mathbb{P}(\text{Autism}\mid\text{GD})\) | \((0.030954, 0.239661)\) |
| cisgender-individuals-altogether | MU | \(-4.398846\) | \(0.485557\) | \(0.0138293\) | \(\mathbb{P}(\text{Autism}\mid\lnot\text{GD})\) | \((0.004745, 0.031836)\) |

Computing all the paper's probabilities

I am a man (woman?) of my word, so here is the entire paper's dataset converted to 0-100 probabilities.

But let's visualize some neat things, shall we ?

Here are the exact results from the paper, reframed in terms of probabilities:

But wait ! I can do more : here is the entire distribution of possible probability values for \(\mathbb{P}(\text{Autism}\mid\text{GD})\)

As you can see, that \(6.13\%\) estimate is pretty conservative ! It can go as high as \(10\% +\) with high likelihood.

Back to the “Egg question”

What about the probabilities of being trans assuming I'm autistic ?

Here they are:

We can see I have a much higher chance of being transgender, but nothing north of the range \(5-10\%\).

Voilà, that concludes our little data escapade :)

This post is about a very complex way of justifying a neat notation: in some texts, the authors write \(\bot\) for the statement that's always false, and \(\top\) for the statement that's always true. Here we'll write \(p \equiv q \) if both \(p\implies q\) and \(q\implies p\) hold. The logical or is denoted \(\lor\) and the logical and \(\land\) when there's no risk of confusion.


A partially-ordered set (or poset) is a set \(L\) over which a certain binary relation \(\leq\) is defined. This relation must satisfy three conditions: let \(a,b,c\) be three elements of \(L\)

  • reflexive: we always have \(a \leq a\)
  • antisymmetric: if both \(a\leq b\) and \(b\leq a\) hold, then \(a = b\)
  • transitive: if \(a\leq b\) and \(b\leq c\) then \(a\leq c\)

Logic as a poset

For two statements \(p,q\), define \((p\leq q )\triangleq (p \implies q)\). This makes sense because the \(\leq \) symbol can be read as “comes before” or “is a predecessor of”. Indeed an implication involves a statement that holds “before” and a conclusion that comes “after”. Then we have that \( p \leq p \) is defined as \( p \implies p\) which is logically equivalent to \(\lnot p \lor p\) which always holds. By definition we have that \( p \leq q\) and \(q \leq p\) means implication in both directions, that is \(p \equiv q\), that's antisymmetry. Finally, if one has \(p \leq q\) and \(q\leq r\) then we have \(p \implies q \implies r\) which gives transitivity.
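Since there are only two truth values, the three poset axioms can be brute-force checked; a small sketch:

```python
from itertools import product

def leq(p, q):
    # p <= q is defined as p => q, i.e. (not p) or q
    return (not p) or q

bools = (False, True)

# reflexivity: p <= p always holds
assert all(leq(p, p) for p in bools)
# antisymmetry: mutual implication forces logical equivalence
assert all(p == q for p, q in product(bools, repeat=2)
           if leq(p, q) and leq(q, p))
# transitivity: p <= q and q <= r imply p <= r
assert all(leq(p, r) for p, q, r in product(bools, repeat=3)
           if leq(p, q) and leq(q, r))
```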


If one equips a poset with two specific binary operators, called join (\(\lor\)) and meet (\(\land\)), we obtain a lattice. The join \(a \lor b\) of two elements \(a,b\) is the least upper bound of the set \(\{a,b\}\). The meet \(a\land b\), on the other hand, is the greatest lower bound of the same set.

Logic as a lattice

If we define the logical or (which we will denote "\(\mathrm{or}\)" to avoid confusion) as our join and the logical and (which we here denote "\(\mathrm{and}\)") as our meet, we must check the basic properties of the two operators.

The join first has to satisfy both \(p \leq p\lor q\) and \(q \leq p\lor q\). This holds because we always have \(p\implies p\lor q\), and by symmetry the other inequality holds. We also have to show our candidate join is a least upper bound, that is, any upper bound \(u\) of \(\{p,q\}\) satisfies \(p\lor q \leq u\). If \(u\) is an upper bound in the sense of \(\leq\), then $$p\;\mathrm{or}\; q \implies u \equiv \lnot (p\;\mathrm{or}\; q) \;\mathrm{or}\; u \equiv (\lnot p \;\mathrm{or}\; u) \;\mathrm{and}\; (\lnot q \;\mathrm{or}\; u) $$

this last expression exactly means that both \(p\leq u\) and \(q\leq u\) hold. Similar computations yield that the logical and is indeed a valid meet.
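These least-upper-bound and greatest-lower-bound properties can also be checked exhaustively over the two truth values:

```python
from itertools import product

leq = lambda p, q: (not p) or q  # p <= q  :=  p => q
bools = (False, True)

for p, q in product(bools, repeat=2):
    join, meet = p or q, p and q
    assert leq(p, join) and leq(q, join)  # join is an upper bound
    assert leq(meet, p) and leq(meet, q)  # meet is a lower bound
    for u in bools:
        if leq(p, u) and leq(q, u):       # any upper bound u ...
            assert leq(join, u)           # ... dominates the join
        if leq(u, p) and leq(u, q):       # any lower bound u ...
            assert leq(u, meet)           # ... is below the meet
```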

Bounded lattices

A bounded lattice is a lattice for which there is a bottom element \(\bot\) such that \(\bot \leq a\) for any \(a\), and a top element which is greater than any other element.

Logic as a bounded lattice

Here's our final question: can the two symbols \(\bot\) and \(\top\) have both a “lattice interpretation” and a “logical interpretation” ?

For any proposition \(p\), we have that the false statement \(\bot\) satisfies \(\bot \leq p\) because

$$ \bot \implies p \equiv \lnot\bot \;\mathrm{or}\; p \equiv \top \;\mathrm{or}\; p \equiv \top$$

since the logical or of a true statement and any statement is always true. The derivation for the top element is similar, as \(p\leq \top\) means \(\lnot p\;\mathrm{or}\;\top\) which always holds and is equivalent to \(\top\).
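And the bottom/top laws, checked the same way (reading \(\bot\) as False and \(\top\) as True):

```python
leq = lambda p, q: (not p) or q  # p <= q  :=  p => q

# bottom <= p and p <= top for every proposition p
assert all(leq(False, p) and leq(p, True) for p in (False, True))
```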

We thus showed that logic can be thought of as a bounded lattice, which justifies calling \(\bot\) the "always false" statement and \(\top\) the "always true" statement.

In this post, we'll prove some properties of \(\mathrm{Mat}_\mathbb{R}(m,m) = \mathbb{R}^{m\times m}\), the space of square matrices. As all norms are equivalent in finite dimension, we equip the space with the Frobenius norm:

$$ \lVert \mathrm{A} \rVert_F^2 \triangleq \sum_{i=1}^m\sum_{j=1}^m a_{ij}^2 $$ where \(\mathrm{A} =[a_{ij}]_{i,j}\) are the entries of the matrix.

Basic properties

It is trivial to prove that a sequence of matrices \((\mathrm{A}^{(n)})_n\) converges to a limit \(\mathrm{A}\) if and only if the entry sequences \((a_{ij}^{(n)})_n\) converge to \(a_{ij}\) for all \(i,j\). We also obviously have that the sum of convergent matrix sequences converges to the sum of the limits. Another easy-to-prove fact is that if two sequences of matrices \((\mathrm{A}^{(n)})_n\) and \((\mathrm{B}^{(n)})_n\) converge to limits \(\mathrm{A}, \mathrm{B}\), then the product sequence converges to the product of the limits \(\mathrm{AB}\). This can be proven by showing the \(i,j\)-th entry of the product is

$$ \left[\mathrm{A}^{(n)}\mathrm{B}^{(n)}\right]_{ij} = \sum_{k=1}^{m} a^{(n)}_{ik}b^{(n)}_{kj} $$ which converges as a finite sum of products of convergent sequences \(a^{(n)}_{ik} \to a_{ik}\) and \(b^{(n)}_{kj} \to b_{kj}\).

Using this fact, we can prove a useful lemma: denote \(S_n \triangleq \sum_{k=0}^n \mathrm{A}^k\) the partial sum of matrix powers, we get that

$$ (\mathrm{I} - \mathrm{A})S_n = \mathrm{I} - \mathrm{A}^{n+1}$$

therefore \(S_n\) converges if and only if \(\mathrm{I} - \mathrm{A}^{n+1}\) does, which converges if and only if \(\mathrm{A}^{n+1}\) does. This geometric sequence converges (to zero) whenever \(\lVert \mathrm{A} \rVert_F < 1\). Passing to the limit in the identity above gives \((\mathrm{I} - \mathrm{A})S = \mathrm{I}\), and \(S(\mathrm{I} - \mathrm{A}) = \mathrm{I}\) by the same computation, so \(\lVert \mathrm{A} \rVert_F < 1 \implies (\mathrm{I} - \mathrm{A})^{-1}\) exists and

$$ (\mathrm{I} - \mathrm{A})^{-1} = \sum_{k=0}^\infty \mathrm{A}^k$$

Another easy-to-prove property of the matrix norm is that it is submultiplicative: \(\lVert \mathrm{AB}\rVert \leq \lVert \mathrm{A}\rVert \lVert \mathrm{B}\rVert \).
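A quick numerical illustration of the Neumann series above (a sketch using NumPy; the rescaling constant 0.9 and the truncation level 400 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A *= 0.9 / np.linalg.norm(A, 'fro')  # rescale so that ||A||_F = 0.9 < 1

# partial sum S_n = I + A + A^2 + ... + A^n
S, P = np.eye(4), np.eye(4)
for _ in range(400):
    P = P @ A
    S += P

# the truncated series matches the inverse of I - A
assert np.allclose(S, np.linalg.inv(np.eye(4) - A))
```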

Invertible linear maps form an open set

We can prove this for any matrix norm, so we'll pick the one defined above. Suppose the matrix \(\mathrm{A}\) is invertible. We have to show that for some \(\delta > 0\) and any perturbation \(\mathrm{H}\) with small amplitude (\(\lVert \mathrm{H} \rVert < \delta\)), the perturbed matrix \(\mathrm{A} + \mathrm{H}\) is still invertible.

The first step is to use our lemma above, noticing that

$$ \left(\mathrm{A} + \mathrm{H}\right)^{-1} = \left(\mathrm{I} + \mathrm{A}^{-1}\mathrm{H}\right)^{-1}\mathrm{A}^{-1}$$

therefore, we can always pick \(\delta = 1 / (2\lVert \mathrm{A}^{-1} \rVert)\), which yields (by submultiplicativity) that \(\mathrm{A}^{-1}\mathrm{H}\) has norm less than \(\frac12\), and thus the inverse exists. We just proved that the set of invertible matrices is open.
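Numerically, the bound \(\delta = 1/(2\lVert\mathrm{A}^{-1}\rVert)\) can be illustrated as follows (a sketch; the example matrix and the random perturbation direction are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.eye(3) + 0.1 * rng.standard_normal((3, 3))  # an invertible matrix
delta = 1 / (2 * np.linalg.norm(np.linalg.inv(A), 'fro'))

H = rng.standard_normal((3, 3))
H *= 0.99 * delta / np.linalg.norm(H, 'fro')       # ||H|| < delta

# ||A^{-1}H|| <= ||A^{-1}|| ||H|| < 1/2, so I + A^{-1}H (hence A + H) is invertible
assert np.linalg.norm(np.linalg.inv(A) @ H, 'fro') < 0.5
assert abs(np.linalg.det(A + H)) > 1e-8
```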

Invertible linear maps form a dense set

A fairly basic result is that any matrix can be reduced to RREF (Row Reduced Echelon Form), defined by four properties:

  • In every row, the first non-zero entry is \(1\) (called a pivotal \(1\))
  • The pivotal \(1\) of a lower row is always to the right of the pivotal \(1\) of a higher row
  • Every column which contains a pivotal \(1\) has only zeros as its other entries
  • Rows consisting only of \(0\)'s are at the bottom

Further, this RREF is obtained by multiplying by a product of elementary matrices, which are invertible (and whose inverses are themselves elementary). Thus any singular matrix \(\mathrm{M}\) can be written as \(\mathrm{M} = \mathrm{E}_k\mathrm{E}_{k-1}\dots\mathrm{E}_2\mathrm{E}_1\mathrm{A}\) with \(\mathrm{A}\) in RREF and each \(\mathrm{E}_i\) elementary.

To construct a sequence of invertible matrices that tends to \(\mathrm{M}\) in the limit, observe that the only non-invertible factor in the product is \(\mathrm{A}\): since \(\mathrm{M}\) is singular, \(p > 0\) rows of \(\mathrm{A}\) have a zero diagonal entry. If we replace those zeroes by \(1/n\) we obtain a sequence \(\mathrm{A}_n\) of invertible matrices and

$$\lim_{n\to\infty}\mathrm{E}_k\mathrm{E}_{k-1}\dots\mathrm{E}_2\mathrm{E}_1\mathrm{A}_n = \mathrm{E}_k\mathrm{E}_{k-1}\dots\mathrm{E}_2\mathrm{E}_1\lim_{n\to\infty}\mathrm{A}_n = \mathrm{M}$$

which proves that invertible matrices are dense.
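A tiny numerical illustration of the \(1/n\) trick on a concrete singular matrix (here perturbing the diagonal directly, rather than going through the RREF factorization):

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # rank 1, hence singular
assert abs(np.linalg.det(M)) < 1e-12

for n in (10, 100, 1000):
    Mn = M + np.eye(2) / n  # invertible perturbation
    assert abs(np.linalg.det(Mn)) > 0
    # and Mn -> M in Frobenius norm as n grows
    assert np.isclose(np.linalg.norm(Mn - M, 'fro'), np.sqrt(2) / n)
```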

This post is about a multivariate version of the usual Laplace approximation of a partition function. Suppose one has a parametric distribution \(p_\theta\) given by

$$ p_\theta(\mathrm{x}) = \dfrac{1}{Z(\theta)}\exp\left(-T(\mathrm{x}) + \theta^\mathsf{T}\mathrm{x}\right) $$

and one wishes to compute the partition function $$Z(\theta) \triangleq \int_\mathcal{X}\exp\left(-T(\mathrm{x}) + \theta^\mathsf{T}\mathrm{x}\right)\,\mathrm{dx}$$

Then, given a saddle point \(\mathrm{x}^\star\) for which \(\nabla_\mathrm{x}T(\mathrm{x}^\star) = 0\), the quadratic approximation of \(T(\cdot)\) reads

$$T(\mathrm{x}) \approx T(\mathrm{x}^\star) + \tfrac12(\mathrm{x} - \mathrm{x}^\star)^\mathsf{T}\nabla_\mathrm{x}^2T(\mathrm{x}^\star)(\mathrm{x} - \mathrm{x}^\star) $$ where we will write \(\mathrm{H}_\star \triangleq \nabla_\mathrm{x}^2T(\mathrm{x}^\star)\) for the Hessian to simplify notation.

To be added.

Suppose a parametric function \(f_\theta \colon A \to \mathbb{R}\) is jointly convex in \((\theta, x)\), where \(\theta\) is constrained to a compact convex set \(\Theta\) and \(x \in A\). Then one can prove the function \(g(x) = \min_\theta f_\theta(x)\) is convex in \(x\). Indeed, take any two \(x_1, x_2 \in A\) and \(t\in (0,1)\), and let \(\theta_1, \theta_2\) be minimizers of \(f_\theta(x_1)\) and \(f_\theta(x_2)\) respectively (they exist by compactness of \(\Theta\)). Writing \(\theta_t \triangleq t\theta_1 + (1-t)\theta_2 \in \Theta\), we have

\begin{align} g(tx_1 + (1-t)x_2) &\leq f_{\theta_t}(tx_1 + (1-t)x_2) \\ & \leq tf_{\theta_1}(x_1) + (1-t)f_{\theta_2}(x_2) \\ & = t\,g(x_1) + (1-t)\,g(x_2) \\ \end{align}

where the first inequality holds because \(g\) is a minimum over \(\theta\), and the second is joint convexity along the segment from \((\theta_1, x_1)\) to \((\theta_2, x_2)\). More formally, this argument can be extended to the infimum \(\inf_\theta f_\theta(x)\) by picking near-minimizers \(\theta_i\) with \(f_{\theta_i}(x_i) \leq g(x_i) + \varepsilon\) and letting \(\varepsilon \to 0\).

A similar (in fact easier) derivation gives that \(h(x) = \max_\theta f_\theta(x)\) (and its \(\sup\) equivalent) is convex as well — there, convexity in \(x\) alone suffices, since a pointwise maximum of convex functions is convex.
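A numerical illustration with the jointly convex example \(f_\theta(x) = (x-\theta)^2 + \theta^2\) over \(\Theta = [-1, 1]\) (a sketch; the grid minimization is a stand-in for the true \(\min_\theta\)):

```python
import numpy as np

thetas = np.linspace(-1.0, 1.0, 2001)  # discretized compact parameter set

def g(x):
    # g(x) = min over theta of (x - theta)^2 + theta^2, jointly convex in (theta, x)
    return np.min((x - thetas)**2 + thetas**2)

# midpoint convexity check on a grid of pairs (small slack for discretization)
xs = np.linspace(-0.8, 0.8, 33)
for x1 in xs:
    for x2 in xs:
        assert g((x1 + x2) / 2) <= (g(x1) + g(x2)) / 2 + 1e-5
```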
