a3nm's blog

Estimating carbon footprints: what is 1 ton of CO2e?

I just posted an article on the blog of the TCS4F initiative about how to understand the carbon footprint of various kinds of activities. You can read it here: Estimating carbon footprints: what is 1 ton of CO2e?.

Writing this kind of post is a bit hard for me, because I'm certainly not an expert on climate change, so I don't feel very legitimate and can only hope that what I am saying there is realistic. Still, I hope that doing my own estimates is better than being completely in the dark. Anyway, reading about all this was an occasion to discover many surprising things about my own footprint, which can help me focus my efforts on the right areas. Here are the numbers, based on figures from the post, that were surprisingly high for me:

  • The plane trips I take (most of which are for work) probably have a carbon footprint greater than all my other sources of emissions combined: several tons of CO2e per year, if not dozens of tons.1 So this was the main takeaway: the number one area I should work on is travelling less to distant places: avoiding a transcontinental plane trip saves tons of CO2e emissions.
  • The carbon footprint of heating my flat (collective heating) is already about 1.2 tons of CO2e per year. Probably a bit less overall than the footprint of the food I eat, but certainly not negligible, even though I don't think much about it and have little control over it...
  • In a sense, the few Bitcoin transactions I did last year may have ended up corresponding to several hundred kilos of CO2e on their own. I would never have guessed.
  • Not using a car probably allows me to save around one ton of CO2e per year in emissions from fuel for my commute, and several additional tons in terms of not having to produce the car.

And here were the surprisingly low numbers:

  • Avoiding meat only saves around 600 kg CO2e per year. This has a huge impact if everyone does it, but at my individual level it is rather small compared to the impact of a single long plane trip. Also, the most important food to avoid is unquestionably beef, and avoiding non-local food (e.g., bananas) does not seem to be worth it (see this study). So I'm still on board with reducing my meat consumption, but transgressing every now and then probably isn't such a big deal.
  • The carbon footprint of electricity in France is really low, thanks to the predominant use of nuclear power (which has other environmental costs, of course), so my total consumption of around 1800 kWh per year only represents around 18 kg CO2e.2
  • Likewise, I was surprised to see that the estimated CO2e footprint of producing a mobile phone is only around 80 kg CO2e, and producing a T-shirt is only around 7 kg CO2e. Again, this says nothing of other environmental costs, but it is lower than I would have expected. I'm still not a fan of buying new hardware just for the sake of it, but apparently it's not such a huge deal in terms of greenhouse gas emissions.

I encourage you to read the post, which can hopefully give you a better idea of how your activities produce CO2e, and of where reducing your emissions will help the most. And of course I'd be delighted to hear back if you have questions or comments about the results.


  1. I even went as far as trying to list all the flights I took in my life, which wasn't so hard (it took about an hour, if I didn't forget any). There are 125 flights, amounting to a total of 408 thousand kilometers, i.e., going around the world 10 times, or a bit more than a one-way trip to the Moon (!). This probably represents around a hundred tons of CO2e, which is commensurate with what a sustainable carbon budget would have been for my entire lifetime (with all emission causes combined)... One surprising point was that over half of this total comes from the 20 longest flights, so the biggest culprits are really the long transcontinental trips. 

  2. In fact, I am a proud subscriber of Enercoop (link in French), which means that in a sense I am already offsetting the footprint of my electricity consumption. 

No free view? No review!

Last month, I blogged about my commitments for open access. Now, with Antonin Delpeuch and the CAPSH association, we are taking this to the next level and launching an initiative: No free view? No review!.

The idea is to give researchers a simple way to publicly say that they no longer want to review for closed-access conferences or journals, or that they prefer not to (we have kept the wording pretty open). My hope is that this will help show that many researchers support open access, and that it will get them to change their reviewing habits, which are easier to change than publishing habits.

So if you are interested, you can sign the pledge, advertise it on your webpage to stop closed-access reviewing requests, and spread the word around you!

Oh, and for another take on open-access and academia's flaws, last weekend I was thinking about that question: what if academia were an open-source project? And inspired by this, I wrote some contributing guidelines for computer science research. I hope this is fun to read — it certainly was fun to write. :)

Open access: my policy and my hopes

I care a lot about academic research papers being publicly available for free on the Internet, aka open access. In addition to describing this in my list of problems with academia, I recently wrote two things about open access:

  • A personal open access policy clarifying why I believe in open access, and which steps I am taking to promote it, in particular by refusing to review for closed-access conferences and journals. I have been following this policy since I was recruited as a tenured academic in August 2016, after having put up with the broken academic publishing system for several years during my PhD. Yet, it took me three and a half years of hesitation and discussion with my colleagues before I was comfortable enough to take a public stand on this. I hope posting this policy online will help colleagues understand my stance (in particular, understand why I won't help them with reviewing for closed-access conferences), and that it will inspire other acts of disobedience against the status quo.
  • A blogpost on the Database Theory Blog, co-written with Pierre Senellart. This is open access advocacy of a different kind, where we explain what would be possible in a saner world where the corpus of scientific papers were downloadable just like the Wikimedia datasets, or like this impromptu mirror of LIPIcs papers that we prepared just because we could. Sadly, as we explain, this valuable resource cannot exist yet because the corpus is currently hidden behind paywalls for historical reasons. It is our responsibility as researchers to move away from this model and unlock everything that should be possible with our research.

How to estimate your conference's carbon footprint

I have written an attempt at a guide about how to measure the carbon footprint of a scientific conference (or other event). This is largely inspired by the computation of the carbon footprint of the AGU 2019 Fall Meeting, and also by our experience computing the carbon footprint of SWERC 2019–2020.

This blogpost is published on the blog of the TCS4F initiative, an attempt by theoretical computer scientists to reduce the carbon footprint of our activities and help mitigate the climate crisis. I'm involved in the initiative (mostly as an author of blog posts, for now) and I have signed their manifesto: I encourage you to do the same if you agree with taking action on this important issue!

In any case, head here to read the blog post!

Songflower -- reflowing sheet music to a different paper size

The problem

I wanted to have a look at Melodia, a public-domain textbook to learn sight-reading. I thought it would be nice to read Melodia on an ebook reader, but the available PDF is a scan of a paper copy in 7 by 10 inches, and my Kobo Glo screen is only around 3.5 by 4.8 inches:

sizes of paper and display

The obvious solution is to scale it down, but it would be too small to read. So wouldn't it be better if we could split each page into parts of the right size?

splitting music into blocks naively

Of course, it's not that simple. When reading music you want to follow the lines from left to right, then from top to bottom. It's exactly like text, where the right way to reflow a page is:

splitting text into blocks correctly

One way to do this would be to use optical music recognition to transcribe the whole score and then re-render it in the right format. But that's not so fun, and I don't know of good implementations of this task that I could easily install (i.e., that aren't expensive, Windows-only programs). Instead, let's go for a more ad-hoc but much simpler approach.

To split music, you need to be able to recognize the separate lines (i.e., staves, or systems). Then you need to find a way to cut them: typically at a bar line, to avoid splitting inside a bar. More subtly, for the result to look good, you want every line on the output page to have exactly the same length, even if the bars taken from the input page do not add up to the same width; otherwise the result will look jagged. The naive solution would be to stretch each line uniformly, but that looks awful and distorted. Instead, you want to do something like seam carving: scale the "uninteresting parts" (e.g., empty space on the staff) while leaving the "important stuff" (notes, accidentals, etc.) untouched. See:

3 ways to split sheet music

What's more, because too much stretching is undesirable, you need to decide where to cut among all the bar lines. The goal is to avoid lines that are too short and would need to be resized by a large factor (leaving too much empty space). The naive way is to fit as many bars as possible on each line, but this can leave you with very short lines that are hard to format properly. This is a variant of the word wrapping problem:

word wrapping problem
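As an aside, here is a minimal sketch of the textbook dynamic-programming solution to this kind of line-breaking problem, applied to bar widths rather than word widths. The bar widths, target width, and quadratic badness function are all made up for the example, and this is not necessarily the strategy that songflower uses.

    # Sketch: dynamic-programming line breaking (Knuth-Plass style),
    # applied to bar widths instead of word widths. Inputs are hypothetical.
    def break_bars(bar_widths, line_width):
        n = len(bar_widths)

        def badness(i, j):
            # cost of putting bars i..j-1 on one line: penalize leftover
            # space quadratically, forbid overfull lines
            used = sum(bar_widths[i:j])
            return float("inf") if used > line_width else (line_width - used) ** 2

        # best[i]: minimal total badness for bars i..n-1; choice[i]: where to break
        best = [0.0] * (n + 1)
        choice = [n] * (n + 1)
        for i in range(n - 1, -1, -1):
            best[i] = float("inf")
            for j in range(i + 1, n + 1):
                cost = badness(i, j) + best[j]
                if cost < best[i]:
                    best[i], choice[i] = cost, j

        # reconstruct the lines as lists of bar indices
        lines, i = [], 0
        while i < n:
            lines.append(list(range(i, choice[i])))
            i = choice[i]
        return lines

    # Example: bars of various widths to fit on lines of width 10
    print(break_bars([3, 4, 2, 5, 3, 4], 10))

Compared to the greedy strategy, the dynamic program balances the leftover space across all lines instead of dumping it onto the last one.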

The solution

So I wrote some code to take care of these tasks for me. I called it songflower, as in something that (re)flows songs. The code is here. I don't expect it to be usable without significant tweaking, but there is a README and some documentation that allow you to play with it and tweak the parameters.

So what does it do exactly? Glad you asked. :) Here are the steps:

  • Burst the input PDF into a collection of bitmap images, one for each page
  • Split each page into one image for each line on the page (i.e., staves, or systems)
  • Split each line into chunks, i.e., sequences of consecutive bars having exactly the prescribed width. This involves:
    • Finding the bar lines
    • Splitting bars into chunks intelligently
    • Stretching the chunks to have exactly the right width
  • Combine the chunks into output pages (laying them out nicely on the page) and combine those into the output PDF.

The first step is easy with convert from ImageMagick, although it is pretty slow and, for some reason, takes lots of RAM. The last step is also pretty routine: you greedily put as many chunks on each page as you can, compute the right spacing between the chunks to fill the available height in the layout, and then combine the result into a PDF with convert again, plus pdftk. The harder steps are the two middle steps:

the two hard steps
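Before getting to those, here is a rough sketch of what the two easy bookend steps could look like in Python; the file names, resolution, and page-packing details are made up for the example and are not necessarily what songflower does.

    # Sketch: burst a PDF into one PNG per page with ImageMagick's convert,
    # then greedily pack chunk images into pages. Names and values are made up.
    import subprocess

    def burst_pdf(pdf_path, out_prefix, density=300):
        # -density controls the rasterization resolution; %03d numbers the pages
        subprocess.run(
            ["convert", "-density", str(density), pdf_path, out_prefix + "-%03d.png"],
            check=True,
        )

    def pack_chunks(chunk_heights, page_height):
        # greedily group chunks into pages, then compute the vertical gap that
        # spreads each page's chunks over the full page height
        pages, current = [], []
        for h in chunk_heights:
            if current and sum(current) + h > page_height:
                pages.append(current)
                current = []
            current.append(h)
        if current:
            pages.append(current)
        gaps = [(page_height - sum(p)) / max(len(p) - 1, 1) for p in pages]
        return pages, gaps

    # Example usage (burst_pdf requires ImageMagick and an input file):
    # burst_pdf("melodia.pdf", "page")
    print(pack_chunks([300, 280, 310, 290, 305], 1000))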

Step 1: Splitting pages into lines

This step is comparatively easy. The program just looks for rows of the picture that consist mostly of white pixels. It then smooths this by merging together consecutive white rows that are sufficiently close (to get some resilience to noise). Finally, it cuts at the largest runs of consecutive white rows, subject to some threshold. This is how the program sees a page of Melodia:

illustrating line splits

The yellow lines are the lines detected as white space, and the red lines are where the program decided to cut (there were enough consecutive empty lines). The green lines were not used to cut because they yielded a line that was too short. The result here is pretty OK, except that the program was a bit too eager and cut away a tie connecting two dotted minims on the 4th staff from the bottom. Apart from small problems of this kind, the approach is pretty robust on the Melodia dataset. The main problems involve slurs, staves that are too close together on a page, and lyrics which often get removed (this is fine for my needs but it might not be for yours).
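For illustration, here is a minimal sketch of this kind of row-based splitting, using numpy and Pillow; the whiteness and gap thresholds are made up and would need tuning, and the actual songflower code may differ.

    # Sketch: split a page image into line images by cutting in the middle of
    # long runs of (mostly) white rows. Thresholds are illustrative only.
    import numpy as np
    from PIL import Image

    def split_into_lines(page_png, white_ratio=0.99, min_gap=20):
        img = np.array(Image.open(page_png).convert("L"))  # grayscale, 0 = black
        # a row counts as "white" if almost all of its pixels are close to white
        white_rows = (img > 200).mean(axis=1) > white_ratio

        # cut in the middle of every sufficiently long run of white rows
        cuts, run = [0], 0
        for y, is_white in enumerate(white_rows):
            if is_white:
                run += 1
            else:
                if run >= min_gap:
                    cuts.append(y - run // 2)
                run = 0
        cuts.append(img.shape[0])

        # one sub-image per detected line, dropping slices that are too thin
        return [img[a:b] for a, b in zip(cuts, cuts[1:]) if b - a > min_gap]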

Step 2: Splitting lines into chunks

This is the trickier part. Let's focus on the first line above:

illustrating line splits (original line)

This is how the program sees it:

illustrating line splits (annotation)

OK, this requires some explaining. The program looks at columns of pixels and computes two things: the height of the column, and the weight of the column (smoothed over a small window). The height is just the maximal vertical distance between two non-white pixels in the column, and the weight is the sum of the pixels' distances to black. We are especially interested in columns that have both a minimal height and a minimal weight (excluding some outliers, with some thresholds, etc.). Intuitively, these columns are points on the staff where nothing is happening. These are the purple parts on the picture above: the cyan parts have small weight but not small height, the pink parts have small height but not small weight, and the black parts satisfy neither criterion. (Sorry, these ugly colors are just what was easiest to produce -- it's a debug output.)

We can see that both criteria are useful: if we look only at the height (pink and purple), then we still include columns of the staff that are not empty. If we look only at the weight, then we get fooled by some columns that include part of a note (bars 2 and 3). But when we look at both criteria, the purple parts are a reasonable approximation of the parts of the score where nothing is happening. (Note that floating material, e.g., the numbers 168 and 169, prevents parts of the staff from being detected as empty.)
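As a rough sketch (not the actual songflower code), here is how these per-column statistics could be computed with numpy; I am reading "weight" as a measure of how much ink a column contains, and all thresholds are placeholders.

    # Sketch: per-column "height" and "weight" on a grayscale line image
    # (numpy array, 0 = black, 255 = white). Thresholds are placeholders.
    import numpy as np

    def column_stats(line_img, ink_threshold=200, window=5):
        _, w = line_img.shape
        ink = line_img < ink_threshold                    # non-white pixels
        heights = np.zeros(w, dtype=int)
        for x in range(w):
            ys = np.nonzero(ink[:, x])[0]
            if len(ys) > 0:
                heights[x] = ys[-1] - ys[0]               # vertical extent of the ink
        # "weight": total amount of ink in the column, smoothed horizontally
        darkness = (255 - line_img.astype(int)).sum(axis=0)
        weights = np.convolve(darkness, np.ones(window) / window, mode="same")
        return heights, weights

    def quiet_columns(heights, weights, h_thresh, w_thresh):
        # the "purple" columns: small height and small weight, i.e., places
        # on the staff where nothing much is happening
        return (heights <= h_thresh) & (weights <= w_thresh)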

Now, it's good to find where nothing is happening, but where can we find bar lines? The idea is that a bar line is a part of the staff which:

  • is of minimal height but not of minimal weight (pink)
  • is surrounded to the left and right by points of minimal height and weight (purple)
  • is narrow, i.e., not too many pixels wide
  • contains a column with high weight, i.e., significantly higher (by some factor) than the weight of columns where nothing happens.

This is a complicated collection of criteria, but I didn't have any luck with more naive approaches. My initial idea was just to look for columns having a minimal height and a high weight, but this is fooled by note stems, e.g., the stem of the last crotchet of bar 3. Still, the weight criterion is fragile, especially with minims, as their noteheads are sometimes not heavy enough to be detected; and you have to set some parameters correctly.
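Here is a rough sketch of how these checks could be combined, building on the column statistics above; the widths, factors, and thresholds are invented for the example and are not the ones songflower uses.

    # Sketch: flag bar-line candidates from the per-column statistics.
    # All widths, factors, and thresholds are illustrative.
    import numpy as np

    def bar_line_candidates(heights, weights, h_thresh, w_thresh,
                            max_width=6, weight_factor=3.0):
        quiet = (heights <= h_thresh) & (weights <= w_thresh)   # "purple"
        short = heights <= h_thresh                             # small height
        baseline = np.median(weights[quiet]) if quiet.any() else w_thresh

        candidates, x, w = [], 0, len(heights)
        while x < w:
            if short[x] and not quiet[x]:
                # a candidate is a narrow run of short-but-not-quiet columns...
                start = x
                while x < w and short[x] and not quiet[x]:
                    x += 1
                narrow = (x - start) <= max_width
                # ...surrounded by quiet columns on both sides...
                surrounded = start > 0 and quiet[start - 1] and x < w and quiet[x]
                # ...containing a column much heavier than the quiet baseline
                heavy = weights[start:x].max() >= weight_factor * baseline
                if narrow and surrounded and heavy:
                    candidates.append((start + x) // 2)   # x-position of the bar line
            else:
                x += 1
        return candidates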

On the example above, this approach correctly detects all the bar lines, except the double bar in the middle. The blue line is where the program decided to cut the line (to fit the width I had requested), and the red lines are possible cuts that were not used. Here the choice of a cut is simple (take the one closest to the middle), but in more complicated cases the program will try all the possible choices.

The last thing to do is to stretch the chunks to the left and right of the blue line so that they have exactly the same length -- which they currently don't (the left chunk is much longer than the right one). To do this, the program simply scales the purple regions. We get this (without the colors):

first split second split

It's not perfect: the first bar is a bit off, because the program did not stretch around the "168".1 Also, in a proper edition, the key signature would be repeated at the beginning of the second line, which my program doesn't do, but well... Other than that, I think it's pretty seamless, and hard to notice that some work was involved in bringing these two lines to the same width.
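For concreteness, here is a toy sketch of this kind of stretching: it widens a line image to a target width by duplicating only the blank ("quiet") columns, leaving the inked parts untouched. The actual songflower code likely does something more refined.

    # Sketch: widen a line image to a target width by duplicating only the
    # blank ("quiet") columns, leaving notes and other ink untouched.
    import numpy as np

    def stretch_blank_regions(line_img, quiet, target_width):
        # line_img: grayscale array of shape (h, w); quiet: boolean array of
        # length w marking the columns where nothing is happening
        _, w = line_img.shape
        extra = target_width - w
        if extra <= 0:
            return line_img

        pieces, budget, remaining = [], extra, int(quiet.sum())
        for x in range(w):
            col = line_img[:, x:x + 1]
            pieces.append(col)
            if quiet[x] and remaining > 0:
                # spread the extra width evenly over the blank columns by
                # duplicating each of them a whole number of times
                dup = budget // remaining
                pieces.extend([col] * dup)
                budget -= dup
                remaining -= 1
        return np.hstack(pieces)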

Of course, this example is a case where the program doesn't perform too badly. Sometimes, it has a harder time doing something reasonable:

case where the program fails

One could think of more elaborate approaches to avoid such problems, e.g., identifying floating elements like "168" or lyrics, processing the staff without taking them into account, and then attaching them back in the right place afterwards...

Wrapping up

There are a lot of corner cases and other subtleties in the code which I didn't get into. For instance, when the program doesn't detect enough bar lines, it falls back on cutting at spaces detected to be empty -- this is usually less disruptive. Performance is also an issue, as the code is in Python: to speed things up, I use GNU parallel to process multiple pages and lines simultaneously.

Anyway: by applying the process to Melodia, fixing a few glitches by tweaking the parameters on some pages (and adding a title, removing junk, etc.), I got the reflowed PDF I wanted. It's far from perfect, and some of the later pages are quite wild, but well, I'll see to it when/if I get there. :)

At the very least, you should be able to reflow Melodia to a different format by grabbing the code and running it. Other than that, I'd be happy to know if you have any luck applying this code to other files! I'm also interested in patches or ideas if you find it fun to hack on this too.


  1. If you think the first bar of the second line is also a bit strange, look up: this specific weirdness comes from the original. :) I would say it's a success if the weirdness added by my code isn't so easy to distinguish from the weirdness that comes from the original engraving.