A long time ago… In a galaxy far, far away… I was an embedded hardware and software developer. I architected and programmed microcontroller-based PCBs using a mix of assembly and C. I used to call that “C–”, as it was all so tiny and efficient. Very close to the hardware, extremely low-level. No operating system present and not a byte wasted on anything: many microcontrollers at the time had only 2, 4, or 8 KB of program memory! Ahhh, the good ol’ days, when just one wrong #include could cause an “out of program space” error.
Optimizing applications for energy consumption?!?
We actually used to laugh at this development called “Real Time Java”. First of all, the Java programming language was like a game of cricket: very few people really understood it. And there was nothing Real Time about it. Well, that depends on your definition of Real Time, of course… I also used to work on drivers for systems measuring and managing the water levels of the many waterbodies in the Netherlands, where ONE HOUR was considered Real Time ;).
The biggest issue, however, was that Java consumed far more CPU cycles to get the job done, and required a staggering amount of dynamically allocated memory (which was scary in the world of embedded to begin with). Microcontrollers went from 2 KB of physical memory to 64 KB or even 256 KB. Crazy!
I pretty much left the professional world of software development when they added the “++” to the “C”. Today I still program in both C and Python for fun (Arduino and Raspberry Pi, anyone?).
In today’s world, it seems no one is scared of, nor cares about, the resource increases mentioned above. Today you can plug multiple TB of memory into a single server. But think about this:
If an app written in Java “eats” 16 GB of memory, it would eat only around 3 GB when written in C.
If an app written in Python “eats” 70K CPU cycles to complete a task, it would eat only around 1K CPU cycles when written in C.
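You don’t even need two languages to feel that cycle gap. Here is a minimal sketch, staying entirely in Python: timing a hand-written loop against the builtin `sum`, which runs as compiled C inside CPython. Absolute timings vary by machine and Python version, so treat the numbers as illustrative only.

```python
import timeit

def py_sum(n):
    """Pure-Python loop: every iteration pays interpreter dispatch overhead."""
    total = 0
    for i in range(n):
        total += i
    return total

N = 100_000
# Builtin sum() does the same iteration, but in compiled C inside CPython.
t_loop = timeit.timeit(lambda: py_sum(N), number=50)
t_builtin = timeit.timeit(lambda: sum(range(N)), number=50)

print(f"pure-Python loop: {t_loop:.3f}s  vs  C-backed sum(): {t_builtin:.3f}s")
```

On a typical machine the C-backed builtin wins by a comfortable multiple, and a full rewrite of the hot path in C widens the gap further still.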
These are SHOCKING numbers!!! I found a really comprehensive write-up in which many programming languages were measured on “how green” they really are. You can find the report in PDF form here: Energy Efficiency across Programming Languages. Table 4 of the PDF has this very interesting data:
Looking at Table 4, there are some very interesting things to note. Fun stuff like:
- C is around 75x more CPU-efficient than Python;
- C is around 5x less memory-intensive than Java;
- Wow. Just HOW much memory does JRuby need??
For people thinking that running containers is the “greenest thing ever”… Think again! Yes, it helps, but by changing programming languages you might cut CPU cycles by up to 80x and memory footprint by up to 20x. Translate that into a reduction in physical servers, and it beats energy-efficient servers AND container platforms combined: you could easily get the job done with only half of your server farm, maybe even a third or a quarter!
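As a back-of-envelope sketch (all numbers here are assumed for illustration, not measured): take a hypothetical CPU-bound fleet of 100 servers and a conservative effective speedup of 4x after a rewrite. Real-world gains sit well below the per-cycle 75x figure, because I/O wait, network latency, and idle time don’t shrink when you change languages.

```python
def servers_needed(total_load_units, capacity_per_server):
    """Ceiling division: a partially used server is still a whole server."""
    return -(-total_load_units // capacity_per_server)

# Hypothetical fleet: 100 load units, 1 unit per server before the rewrite.
before = servers_needed(100, 1)
# Assumed 4x effective speedup: each server now handles 4 units.
after = servers_needed(100, 4)

print(f"servers before: {before}, after: {after}")  # 100 -> 25
```

Even under that modest assumption you land at a quarter of the fleet; the exact multiplier depends entirely on how much of the workload is actually CPU-bound.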
“Being Green” in IT: Not just infra, but ALL layers should help!
I believe the above (and part one of this blog post) shows that “being green” in the world of IT requires thought on all layers:
- Use energy efficient infrastructure whenever possible (#IWork4Dell);
- By selecting storage platforms that handle compression and dedupe better, you can save on infrastructure footprint;
- A hypervisor can enable the squeezing of more workloads onto physical servers, even for k8s-based workloads;
- Containers can strip significant Operating System overhead from workloads compared to running everything in VMs;
- By selecting a more efficient programming language for your apps you can save significantly on memory and CPU consumption.
So here I am, having been a fan of the C programming language for around 30 years (yes, I am THAT old 😉), and it is still one of the most efficient languages to use today! Yay! And on the other side of the spectrum: we all know you can build virtually ANYTHING in Python in a single evening when you have the wealth of Python libraries, Google, and ChatGPT on standby :). Something for everyone, I guess.