There are also several 16-bit families, but they don't seem to be worth covering in great detail: the prices of 32-bit and 8-bit chips are already pretty close, so the case for a proprietary in-between solution seems pretty weak.
Often, deep-sleep power and capabilities are the crucial factor in enabling battery and/or solar operation, but you didn't mention anything about this.
Sure, there are specific considerations for specific uses. Most modern chips have robust low-power modes; for example, the AVR Dx line is in 1 µA territory in deep sleep.
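For illustration, entering that mode on a modern AVR is just a few avr-libc calls. A minimal sketch, with the wake-source setup (PIT, pin interrupt, etc.) deliberately omitted and the part choice arbitrary:

    /* Minimal power-down sleep on a modern AVR (e.g. an AVR Dx part), using avr-libc.
     * A wake source must be configured first or the chip never comes back;
     * that part is application-specific and omitted here. */
    #include <avr/io.h>
    #include <avr/sleep.h>
    #include <avr/interrupt.h>

    int main(void)
    {
        sei();                                 /* allow the wake interrupt */

        set_sleep_mode(SLEEP_MODE_PWR_DOWN);   /* deepest mode with SRAM retained */
        sleep_enable();
        sleep_cpu();                           /* ~1 uA class current until wakeup */
        sleep_disable();

        for (;;) { /* handle the wakeup, then sleep again */ }
    }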
Similarly, for some applications, you absolutely need some minimum ADC resolution or sample rate. But I think that gets deeper into the weeds; the target audience for the article is folks who always reach for the same chip, which is the vast majority of hobbyists.
Notable omission in my opinion: the nRF line of microcontrollers is really popular for low-power wireless applications, and it's not very expensive or hard to use for a hobbyist. The proprietary BLE stack can be avoided by using NimBLE, and the official SDK can be avoided the same way STM32's can. It belongs in somewhat the same bracket as the ESP32, but it's a competent choice for low-power/simple applications even if you don't need the radio.
I've only done ESP32, building a bunch of LED drivers and wiring them up for physical and Home Assistant controls. For a complete newbie to physical hardware, it wasn't too bad at all.
I actually never ended up writing any low-level code; everything could be done with ESPHome and YAML config files. Really approachable for newbies.
I think the real distinction is in the name: are you trying to just close a control loop, or do you need some non-trivial application software?
I do not design and build microcontroller-based hardware regularly, and when I do, I have wildly varying requirements. I got tired of vendor-supplied toolchains, drivers and (worst of all) IDEs. These days my main criterion for selection, apart from reliable hardware, is "Does it work with Zephyr?".
Zephyr RTOS became my abstraction layer: I still have to wrestle with constantly breaking toolchain installs, but at least I have *one* platform to understand and build for. And it's a reasonably good one.
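To give a flavor of what "one platform" buys me, this is roughly what a trivial Zephyr app looks like; "led0" is whatever alias the board's devicetree defines, so the same source builds unchanged for an nRF, STM32 or Kinetis board:

    /* Blink whatever the board's devicetree calls "led0". */
    #include <zephyr/kernel.h>
    #include <zephyr/drivers/gpio.h>

    static const struct gpio_dt_spec led = GPIO_DT_SPEC_GET(DT_ALIAS(led0), gpios);

    int main(void)
    {
        if (!gpio_is_ready_dt(&led))
            return -1;

        gpio_pin_configure_dt(&led, GPIO_OUTPUT_ACTIVE);

        while (1) {
            gpio_pin_toggle_dt(&led);
            k_msleep(500);
        }
        return 0;
    }

Switching targets is then just a matter of pointing west at a different board name when building.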
Over the years I also learned to stop optimizing for unit cost. You should definitely do it for production designs, but for my stuff, unit cost is irrelevant. What matters is development time, vendor support, etc. This is why most of my recent designs are based on Nordic Semiconductor chips — yes, they are not the cheapest, but they work, the docs are OK, and the company provides support.
For AVR and Cortex-M specifically - both are supported by mainline gcc and binutils, and you can use any IDE you like. Some vendors really like to pretend otherwise, but you can easily ignore basically all the tooling and most of the abstraction layers they provide.
In these cases, vendor lock-in is mostly a matter of the chips having different peripherals that work in different ways. Some of this can be abstracted away, but ultimately, something like the DMA subsystem or PWM is going to behave quite differently on STM chips than it does on NXP chips. I haven't used Zephyr, so I'm not sure how they approach that, but at a glance, it looks like it's hit and miss - "The DMA API is not a portable API and really cannot be as each DMA has unique memory requirements, peripheral interactions, and features."
The sad thing is that Microchip is deliberately trying to close down the AVR ecosystem. Improvements and support for new chips are only added to their GCC 5.4 fork, to which they have added a paid license requirement if you want to enable optimization. They do provide the GPL source code, with a huge delay, if you ask for it, so you can remove the license check, but that's a bit meh. Microchip has also switched to their own closed-source libc, so avr-libc is only maintained by volunteers now.
Fortunately, there is probably enough interest to keep avr-gcc and avr-libc in reasonably good shape through volunteer effort. But it is really sad that Microchip decided to ruin the excellent open-source community that Atmel had built.
I admit that I'm not following this too closely, but I'm a bit surprised by what you're saying. Aren't the new chips the same CPU core as the xmega2/3/4 devices that have been around for a long time and have robust gcc support?
They have different peripherals and so on, but that doesn't require a fork of gcc, just taking a bunch of .h files (which Microchip does release in a timely fashion and which are licensed under the Apache license).
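For what it's worth, once those headers are in place, driving a new peripheral is just ordinary memory-mapped struct access. Something like this (pin choice arbitrary, register and bit-mask names coming from those vendor-supplied device headers):

    /* Toggle PA7 on a modern AVR (megaAVR 0-series / AVR Dx style registers);
     * everything used here is defined by the device header, not the compiler. */
    #include <avr/io.h>
    #include <stdint.h>

    int main(void)
    {
        PORTA.DIRSET = PIN7_bm;        /* PA7 as output */
        for (;;) {
            PORTA.OUTTGL = PIN7_bm;    /* toggle the pin */
            for (volatile uint32_t i = 0; i < 100000; i++) ;  /* crude delay */
        }
    }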
Hmm, true, the lack of innovation there does reduce the effect. I guess at this point 8-bit controllers are pretty much "done", so it's unlikely that many new bugs will be found in the compiler. I'm not sure whether you can easily override the compiler's MCU definitions for unsupported devices, but the commits needed aren't huge ( https://github.com/gcc-mirror/gcc/commit/919822adc923b00e995e8a148bdb8115a794b47b ).
For me the biggest disappointment was a (commercial) project that had been developed using Microchip's proprietary libc instead of avr-gcc, and is consequently now stuck on Microchip-rebranded gcc 5.4 in all its glory.
Yes, I know; I wrote bare-metal code for Kinetis MKL{02,03,04,05,17,25} and MK02 microcontrollers, I did lots of DMA hacking on the MK20 and MK22, and I wrote bare-metal code for the MSP430 (using gcc). I also wrote drivers (https://github.com/jwr/kinetis_i2c) because the vendor code sucked so badly. But I'm tired of doing this.
It's refreshing to be able to use the same "ecosystem" (for lack of a better word) across vendors. I like the fact that I can use a Kinetis μC one day, an STM the next, and a Nordic the day after that, and most of my code can remain similar.
Yes, for stuff like platform-specific DMA work I would probably need to write code myself (although some vendors, like Nordic, do provide Zephyr drivers for everything on their chips, obviously with varying quality). But I had to do this anyway. With Zephyr, I don't expect to get an all-encompassing cross-platform abstraction, I just want the basics covered, so that I maybe don't have to write any more I2C drivers (I've written several and published three so far). And so that I can remember how to actually build a project and flash it to my board.
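For reference, this is the kind of I2C boilerplate Zephyr takes off my hands. The "mysensor" node label and the 0x0F register address below are made up for the example; they assume a matching devicetree entry exists:

    /* Read one register from an I2C device described in the devicetree. */
    #include <zephyr/kernel.h>
    #include <zephyr/drivers/i2c.h>
    #include <zephyr/sys/printk.h>

    static const struct i2c_dt_spec sensor =
        I2C_DT_SPEC_GET(DT_NODELABEL(mysensor));

    int main(void)
    {
        uint8_t who_am_i;

        if (!i2c_is_ready_dt(&sensor))
            return -1;

        /* Same call whether the bus is on an nRF52, an STM32 or a Kinetis part. */
        if (i2c_reg_read_byte_dt(&sensor, 0x0F, &who_am_i) == 0)
            printk("chip id: 0x%02x\n", who_am_i);

        return 0;
    }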
I used PICs for years, but the predatory pricing of the optimizing compilers was too much. I still occasionally use them for 5V-only designs, but the STM32 parts provide so much more bang for the buck. The toolchains are better, too. I just wish they would address the serious errata.
Yeah, 8-bit PIC made little sense for hobbyists the moment AVR showed up. In commercial settings, I think PICs had lower prices, but right now, they seem to be positioned as some sort of a premium chip for automotive uses? Dunno. I was never a fan.
I find myself reaching for the ESP32 family fairly often, especially for one-off projects. It's useful to have the monitoring and "remote" programming capabilities that come with WiFi. Additionally, the ESP32 family can map most peripherals to any pin, which is really handy. That said, given the complexity of the networking stack, I don't think of applications written on an ESP32 as being quite as reliable as projects on, e.g., an STM32. This has gotten better over time, but it's still somewhat of an issue.
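To illustrate the pin flexibility: in ESP-IDF, routing a peripheral like a UART through the GPIO matrix to whatever pins happen to be free is a single call. A sketch, with the pin numbers chosen arbitrarily:

    /* Route UART1 to arbitrary GPIOs via the ESP32 GPIO matrix (ESP-IDF). */
    #include "driver/uart.h"

    void init_uart_on_any_pins(void)
    {
        const uart_config_t cfg = {
            .baud_rate = 115200,
            .data_bits = UART_DATA_8_BITS,
            .parity    = UART_PARITY_DISABLE,
            .stop_bits = UART_STOP_BITS_1,
            .flow_ctrl = UART_HW_FLOWCTRL_DISABLE,
        };

        uart_driver_install(UART_NUM_1, 2048, 0, 0, NULL, 0);
        uart_param_config(UART_NUM_1, &cfg);

        /* TX on GPIO17, RX on GPIO16 -- but almost any free pair works. */
        uart_set_pin(UART_NUM_1, 17, 16, UART_PIN_NO_CHANGE, UART_PIN_NO_CHANGE);
    }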
It is very costly for battery-operated devices. But yeah, for plugged-in gear, especially home automation, seamless connectivity is a solid selling point.
How costly? Dollars, not really.
Power? Not really either, if you use sleep correctly. And if you are prepared to have a gateway, ESP-NOW saves the roughly 3 seconds of TCP/IP address acquisition on every power-up: you can wake, send your data and be back asleep in about 35 ms, which saves heaps of power.
To the point that I have an ESP32 temperature sensor that sends a reading every ten minutes and maintains its battery charge indoors, indefinitely, from a 50 mm × 50 mm solar cell. Deep sleep is around 10 µA.
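Roughly what that wake-send-sleep cycle looks like in ESP-IDF terms. The gateway MAC, channel and interval are placeholders, and the usual NVS/Wi-Fi STA init (nvs_flash_init, esp_wifi_init/start) is assumed to have run already:

    /* Wake, send one ESP-NOW packet to a gateway, go back to deep sleep. */
    #include <string.h>
    #include "esp_now.h"
    #include "esp_sleep.h"

    static const uint8_t gateway_mac[6] = {0x24, 0x6F, 0x28, 0x00, 0x00, 0x01};

    void send_reading_and_sleep(float temperature)
    {
        esp_now_init();

        esp_now_peer_info_t peer = {0};
        memcpy(peer.peer_addr, gateway_mac, 6);
        peer.channel = 1;            /* must match the gateway's channel */
        peer.encrypt = false;
        esp_now_add_peer(&peer);

        /* No DHCP, no TCP handshake: one frame and we are done.
         * Real code waits for the send callback before sleeping. */
        esp_now_send(gateway_mac, (const uint8_t *)&temperature, sizeof(temperature));

        /* Sleep for ten minutes, then start over from reset. */
        esp_sleep_enable_timer_wakeup(10ULL * 60 * 1000000);
        esp_deep_sleep_start();
    }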