
Sync RTC time to a GPS & Setting the DS3231 Aging Offset Register to Reduce Clock Drift

Here the UNO supplies 5v to the (regulated 50mA) NEO6M and 3v to the RTC. The 5v UNO has no trouble responding to 3v signals from the Neo & RTC without level shifters. White wires to D11/D12 are GPS Rx&Tx via SoftwareSerial. The tantalum caps just stabilize the supply voltages which tend to be noisy on breadboards.

So far in 2024 we’ve released DIY calibration procedures for light, pressure & NTC temperature sensors that can be done without expensive reference equipment. Important techniques; but with long environmental time-series the 800-pound gorilla in the room is actually time itself. Precise timekeeping is something of a back-burner issue in the hobbyist world because so few people are trying to combine data from multi-sensor arrays. And syncing to an NTP server is easy when you are already using wireless comms – provided that’s ‘good enough’ for your application. However, for people who want high accuracy, using NTP is like setting your watch by constantly asking a friend what time it is.

An extension antenna is required or the GPS will never get a signal indoors. With that mounted in a skylight window the FIRST fix took over 30 minutes to acquire. Subsequent fixes took two minutes or less. It will be interesting to see how this $15 combination performs when we service loggers at heavily forested field sites.

Reconstructing a sequence of events in a dynamic earth system is challenging when timestamps are generated by independent and unsynchronized clocks. But our loggers get deployed by the dozen in underwater caves, and even when you are above ground, wireless isn’t really a long-term option on a single CR2032. For stand-alone operation we rely on the DS3231 RTC, which usually drifts about 30 seconds per year – but there are always a few outliers in each batch that exceed the datasheet spec of about 5 seconds per month (±2ppm) for the -SN and 13 sec/month (±5ppm) for the -M (MEMS) chips. These outliers are hard to spot with our usual ‘by hand’ time-setting process over the serial monitor. You can get a set of loggers within 80 milliseconds of each other with that method, but that difference is still annoyingly visible when the LEDs pip. That set me hunting for a better solution.

Paul Stoffregen’s TimeSerial is often suggested, but TimeLib.h didn’t play well with the RTC functions already in our code base. Even after sorting that out, and installing Processing to run SyncArduinoClock, I was still initiating the process by hand. So TimeSerial didn’t get me any closer to the perfectly synchronized LEDs I was after.

This NEO-6M module doesn’t have a PPS header, but the indicator LED is driven by the time-pulse to indicate sync status. This provides one pulse per second, synced at the rising edge, with a 100 msec duration. Soldering a jumper to the LED’s limit resistor lets you bring that signal over to the UNO with a male Dupont header pin.

SergeBre’s SynchroTime seemed like an all-in-one solution. But even after a decade working with the Arduino IDE, I still made every newbie mistake possible trying to compile that C code for Windows. There are simply too many possible editor/compiler/plugin combinations to sift through without a lot of mistaken installations, removals & re-installs. I wasted a couple of days before realizing that code was packaged for the Qt environment, and when I saw the additional cost I finally had enough. In the end, it took me less time to build my own GPS time-sync code than I spent trying to compile SynchroTime. That’s an important lesson in the difference between something that’s technically open source and a usable solution. Of course I can’t write that without wondering how many feel the same way about this project.



Step 1: Determine the Optimum Aging Offset

DS3231 datasheet, pg. 7: In the ideal case there is no better option than leaving the offset at zero. However, many chips in circulation don’t match this spec – especially the -M chips, which can require offsets of -40 or more to match a GPS pulse at room temperature. Most of the M’s run slow, needing adjustments of -20 to -30 at room temp, while the SN’s usually fall between -10 and 0, with a few running fast.

Despite those setbacks, I couldn’t give up this quest knowing that HeyPete had achieved a drift of only 26 seconds with a DS3231 offline for 3 years. The key to that spectacular result was tuning the Aging Offset Register before the run. Positive values in this register add capacitance to the crystal’s load-capacitor array, slowing the oscillator frequency, while negative values remove capacitance and increase the main oscillator frequency. The change is different at different temperatures, but at 25°C one LSB adjusts by approximately 0.1ppm (SN) or 0.12ppm (M). The exact sensitivity is also affected by voltage and age, so it can only be determined for a given chip empirically. The datasheets also warn not to run noisy PCB traces under the RTC that might induce capacitive coupling effects on the crystal, but many modules on the market ignore this. My ‘rule of thumb’ when servicing loggers in the field is that changing the aging register by ±3 will correct approximately 1 second of clock drift per month when I don’t have access to the resources described in this post. Of course that requires you to have good field notes so you can be sure when the logger’s clock was last set.
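That rule of thumb is really just unit conversion. Here is a minimal sketch of the arithmetic (the helper names are my own; the ~0.1/0.12 ppm-per-LSB sensitivities are the datasheet’s nominal room-temperature figures, and real chips vary):

```cpp
#include <cmath>

// Convert a frequency error in ppm to seconds of drift per day / per month.
double ppmToSecPerDay(double ppm)   { return ppm * 86400.0 / 1e6; }
double ppmToSecPerMonth(double ppm) { return ppm * 86400.0 * 30.44 / 1e6; }

// Rule-of-thumb aging-register change for an observed drift, stated as
// seconds GAINED per month (positive = clock running fast). A fast clock
// needs a positive offset (added capacitance slows the oscillator);
// a slow clock needs a negative one.
int agingStepsFor(double gainSecPerMonth, double ppmPerLSB) {
    double ppm = gainSecPerMonth / (86400.0 * 30.44 / 1e6);
    return (int)std::lround(ppm / ppmPerLSB);
}
```

With the -M sensitivity of 0.12 ppm/LSB, one second of monthly drift works out to about 3 register steps – which is where the ±3 field rule comes from.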

In April 2023, ShermanP proposed a method to do this using successive approximations via the Arduino system clock. After considering, and then rejecting, NTP & WWVB, he settled on GPS as the best source and then posted a ready-to-run solution on his GitHub repo: https://github.com/gbhug5a/DS3231-Aging-GPS

Before proceeding further, read the PDF to understand how his code works. The key idea is that “In calibrating the optimum Aging setting, you should duplicate as nearly as possible the power supply voltage the RTC will experience in its application.” – although I suspect this is less critical for the crystal-based -SN chips than for the MEMS oscillators. Unfortunately battery voltages change significantly with temperature, so matching the rail implies you are also doing this RTC adjustment at temperatures near your expected deployment range – which may not be possible. The CR2032s that power our loggers spend most of their operating life at 3.05v, and the power-indicator LED on raw RTC modules pulls enough current to drop the UNO’s 3.3 volt line down to about 3v. Finished loggers draw only 1-2μA while sleeping, so for those I have to add a Schottky 1N5817 inline to drop that supply down to 3.05v during the test.

The rig can be used to test single RTC modules…
[this photo shows the GPS PPS connected to D3]
or finished loggers – provided you put the ProMini to sleep, or load it with Blink so it ignores the I2C traffic sent from the UNO. So we can do these tests & adjustments on loggers at any time after they have gone into service.

Sherman’s code uses the Input Capture Unit, so the PPS signal from the GPS must be connected to D8. I put a male Dupont pin on the end of the PPS tap so the UNO connection can be moved easily, as the other code in this post requires that signal on D3. When testing the RTC inside assembled loggers, I have to use an alligator clip (in green above) for the alarm line, which already has a soldered wire connection – so a female Dupont will not fit over that header pin.

It usually takes 20-30 minutes for the adjustment on a -SN chip to reach a stable value, or settle into a pattern toggling the last bit up and down:

Each cycle of Sherman’s code shown above takes five minutes. The test tends to work better when the RTC time is close to actual GPS time; however, the test changes the RTC time during the cal process. So you will need to re-sync your RTC time to the GPS after the Age offset determination is run. In our version, I’ve added RTC temperature and tweaked the formatting so that it’s easier to graph the output from longer tests. But these are trivial changes.
On this (typical) -M RTC, it took an hour before the 5-minute cycles settled to a stable offset. Later runs of this unit with calcSlope() showed slightly better behavior with a value of -17 but this test might have settled there if I’d left it running longer. Averaging values from the second hour might be the best approach and you want stable temperatures so the RTC isn’t doing TCXO corrections during the test.

Unfortunately the DS3231 has no non-volatile memory, which means all registers reset whenever the chip loses power. So I write the optimum offset from this test on the module’s battery holder with a white paint marker during the component triage I do before building new loggers. About 2/3 of the time, running this test on a completed build gives very similar results to what I got from the RTC module by itself before the logger was assembled. However, for about 1/3 of the RTCs, forcing the chip to run in low-power mode from Vbat slows the main oscillator by up to 2ppm – so the only safe approach is to re-run the aging register test after assembly. The BBSQW bit (battery-backed square-wave enable, bit 6 of the 0x0E control register) must be set for the output to keep running when the RTC is on Vbat.
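For reference, the byte-level details can be sketched like this. Register addresses and bit positions are from the DS3231 datasheet; the helper names are mine, and the actual Wire.h read-modify-write transactions are omitted:

```cpp
#include <cstdint>

// DS3231 register addresses and bits, per the datasheet.
const uint8_t DS3231_CONTROL = 0x0E;
const uint8_t DS3231_AGING   = 0x10;
const uint8_t BBSQW_BIT      = 6;    // battery-backed square-wave enable

// The aging register holds a signed two's-complement byte, so -17
// must be written as 0xEF and sign-extended when read back.
uint8_t agingToByte(int8_t offset) { return (uint8_t)offset; }
int8_t  byteToAging(uint8_t raw)   { return (int8_t)raw; }

// Read-modify-write value that keeps the SQW/alarm output alive on Vbat.
uint8_t setBBSQW(uint8_t ctrl) { return ctrl | (uint8_t)(1 << BBSQW_BIT); }
```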

Many have speculated that there are ‘fake’ DS3231 chips in circulation, but with so many showing scratches & scuff marks I suspect the bad chips are actually the result of rough handling during rework/recycling. And with a chip that’s been in production this long, some are bound to be decades old.

SN chips usually settle to a stable offset value within 2-3 cycles, but it can take an hour or more before the -M chips produce stable age offset results. About one in 40 of the RTC modules never settles to a consistent number no matter how long you leave it, and I toss those as defective. Occasionally this is because, even with the Age register pushed all the way to its maximum (±127), the RTC still cannot match the GPS pulse. Some of the non-calibratable units have a non-functional register – you can write a value to the register and read it back, but it has no effect on the output. I suspect that many of these failures are due to impact damage after the chip has been dropped. I also reject any RTC where the temperature register is off by more than 3°C, because those won’t be able to do their TCXO corrections properly. The aging register and the temperature adjustments get combined at the load-capacitor bank to tweak the main oscillator, so aging register changes won’t get applied until the next (64 sec) temperature correction unless you also trigger a manual conversion. Just by chance, about one in 25 of the -SN chips keeps almost perfect time compared to the GPS with the register left at the zero default. For now I’m keeping those aside as secondary reference units.
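Applying a new aging value immediately, rather than waiting for the next 64-second temperature cycle, means setting the CONV bit and polling BSY. The bit positions below are from the datasheet; the helper names, and the omission of the actual I2C traffic, are mine:

```cpp
#include <cstdint>

// DS3231 control (0x0E) and status (0x0F) register bits, per the datasheet.
const uint8_t CONV_BIT = 5;   // control: start a manual temperature conversion
const uint8_t BSY_BIT  = 2;   // status: a conversion is in progress

// After writing the aging register, set CONV so the correction reaches
// the load-capacitor bank now, instead of at the next 64 s TCXO cycle.
uint8_t startConversion(uint8_t ctrl) { return ctrl | (uint8_t)(1 << CONV_BIT); }

// Poll this until it returns false before trusting the new 1 Hz output.
bool conversionBusy(uint8_t status)   { return (status & (1 << BSY_BIT)) != 0; }
```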

Units per bin vs. aging offset needed to match the GPS pulse: 140 (M) & 85 (SN) RTC modules. These were selected at random from multiple eBay vendors, and tested as unmodified modules powered through Vcc at 3.05v. Six of the SN’s had zero offset, and I divided those equally into the ±5 bins. Interestingly, while the SN’s are much better behaved as a group, that chip type also had the most extreme outliers, with about ten either DOA/unstable or requiring extreme adjustment. I suspect crystal damage explains this observation, as there was only one DOA in the batch of M chips.

To leave room for a decent range of TCXO correction, and with ±2ppm short-term wander (on the Mems chips) the aging register should only be used to compensate for about 6-7 ppm of baseline offset. I try not to use a module where the aging register correction to match the GPS is more than ±50.


Step 2: Synchronize RTC time with a Neo6M GPS

Most clock projects use NTP, but there are a few that go that extra mile and synchronize to GPS. One that caught my attention was Super-Accurate GPS-Corrected RTC Clock – without Internet NTP. He avoided the serial-bus latency of those pokey 9600-baud comms by preloading variables with GPS time + 1 second and then waiting for the next GPS pulse before setting the RTC registers. With this concept in hand, and TinyGPS++ to parse the NMEA strings, it didn’t take long to whip up my own version for our loggers. It’s worth noting that several forums mention NMEA messages can exceed the 64-byte buffer in SoftwareSerial, so I increased this to 128 bytes by editing the file at: C:\Program Files (x86)\Arduino\hardware\arduino\avr\libraries\SoftwareSerial\src
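The preload trick itself is simple once the NMEA sentence has been parsed: hold GPS time + 1 second in variables, then write the RTC registers the instant the next PPS edge fires. A minimal rollover helper (my own sketch – date rollover at midnight is deliberately left out):

```cpp
#include <cstdint>

// Increment an h/m/s time by one second, with rollover, so the RTC
// registers can be preloaded before the next PPS edge arrives.
void addOneSecond(uint8_t &h, uint8_t &m, uint8_t &s) {
    if (++s < 60) return;        // common case: just bump the seconds
    s = 0;
    if (++m < 60) return;
    m = 0;
    if (++h >= 24) h = 0;        // midnight: date handling omitted here
}
```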

Another hidden gotcha is that GPS time can be out by 2 or 3 seconds until the receiver gets a ‘leap seconds’ update, which is sent with the almanac every 12.5 minutes. So wait until the sync LED has been blinking for 15 minutes before setting your clock time, as I don’t (yet) have an error catch for this. Our RTC time-sync code displays how much adjustment was made to the RTC time, and checks the latency between the GPS pulse and the new RTC time immediately after the sync. That difference is often less than 30 microseconds, but it increases from there if you leave the system running:

Note: If you just ran the aging register test in Step 1, you will need to move the PPS signal jumper from D8 to D3 before running RTC2_SyncDS3231Time2GPS. The RTC alarm output stays on D2. Occasionally the process spans a second transition, so if you see the RTC seconds at anything other than GPSsec+1, de-power the RTC and run it again. The RTC’s internal countdown only restarts if the seconds register gets changed, so multiple runs will not reduce the lag once the time has been synced. For some reason I haven’t identified yet, the 1Hz output from -M chips often ends about 1 millisecond before the GPS pulse, producing a negative lag value after sync (because the first ‘partial’ interval is too short?)

You still have to edit the code by hand for your specific local-time adjustment, but everything is well commented. Most scientists run their loggers on UTC, which is what the GPS outputs by default, so that local-time tweak can be commented out.

The external antenna connection was pretty flakey until I secured it with hot glue.

One small issue with having to run these test utilities with the RTC at 3.05v is that you’ll need to change the battery before deploying the logger. To preserve the clock time, connect the logger to a UART so the RTC is powered continuously during any battery swaps. After the time-sync & new battery, the normal procedure is to load a logger with its deployment code, which has a start-menu option to set the RTC’s aging offset. This gets saved into the 1k EEPROM on the 328p processor and, once set, the base code automatically reloads that value from the EEPROM into the RTC’s aging register at each startup. After that’s done the logger is ready to deploy – so Step 3 below is only for those who want to explore the DS3231’s drift behavior in more detail.


Step 3: Testing and Verifying Clock Drift

Now that we have the aging offset, and the RTC is synced to GPS time, how do we verify what we’ve done?

HeyPete ran multi-unit drift tests on the same breadboard with all of the RTCs responding to a single I2C master. I’m tempted to try this approach to drift testing of the other sensors like the BMP280, or the BH1750 although I might need to add a TCA9548 I2C multiplexer.

One method is simply to run the clocks until the drift can be easily measured – but that can take several months. You can get immediate results by enabling the 32kHz output on a DS3231-SN and comparing that to a high accuracy source with an oscilloscope. Ideally, you calibrate to a traceable standard which is at least one decimal place better than your device resolution. Kerry Wong did this with an HP 5350B Microwave Counter and HeyPete uses a Trimble ThunderBolt Timing GPS. There are a few retired engineers out there with universal counters on the bench and for truly dedicated ‘time-nuts‘ only an atomic clock will do. But even then the times from several must be averaged to arrive at a published value, and whenever you achieve better numbers by averaging multiple measurements you obscure potential issues with jitter.

Even if we had that equipment budget, our loggers supply the DS3231 from Vbat to save runtime power, which disables the 32kHz output. And -M chips don’t support that temperature-compensated output no matter how they are powered. So is there any validation test that can be done without expensive kit or the 32kHz line?

Actually there is – thanks to the Needle Nose Pliers blog in Tokyo. He developed a method that uses a least-squares fit over one minute of aggregated readings to resolve rates of change below 0.02μs/second, despite the fact that the UNO’s system clock only ticks every 4μs. I wrapped his calcSlope() function with the modifications needed for the UNO/NEO6 test rig used here, and added an input to change the Aging register before each run. To run the drift-checking code from our Github, connect the GPS PPS to D3, and the RTC SQW to D2:

Note: drift in ms will increase over time and the ppm values typically vary by ±0.02 (or more for -M chips). 9.999 indicates that the code is still collecting the initial 60 readings required for slope calculation. It usually takes another minute after that for the ppm readings to settle. The 1-second cycle-count lets you know the test duration if you leave a long test running in the background.
Once the initial 60 readings are gathered, the ppm drift calculation can be done. In real-world terms, ±2ppm is about 173 msec of drift per day, and you can see that happening in real time with this output. In the test shown above I was deliberately using an offset far from the one suggested by the Step 1 test, to see how much that increased the drift rate.
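The core of the calcSlope() approach is an ordinary least-squares fit: with one PPS-triggered offset reading per second, the slope of offset (µs) against elapsed time (s) is numerically the drift in ppm, and averaging 60 points is what beats the 4µs tick resolution. A stripped-down illustration (this is my own condensation of the idea, not NNP’s actual code):

```cpp
#include <cstddef>

// Least-squares slope of y (RTC-vs-GPS offset, in microseconds) against
// x (elapsed time, in seconds). One reading per second means the slope
// comes out directly in us/s, which is the same number as ppm of drift.
double slopePPM(const double *x, const double *y, size_t n) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (size_t i = 0; i < n; i++) {
        sx  += x[i];         sy  += y[i];
        sxx += x[i] * x[i];  sxy += x[i] * y[i];
    }
    return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}
```

Feeding in 60 offsets that grow by 2µs per second returns a slope of 2, i.e. 2ppm – regardless of any fixed starting offset, since the fit only cares about the rate of change.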

That serial output can then be copied into a spreadsheet to compare the effect that different aging offsets have on the RTC. Here are the results from two five-hour tests of a DS3231-M; first with the Age register set at zero, and then with it set to -17. The clock time was sync’d to GPS time before each test to make the graphs easier to compare, with the x-axis in seconds: (click to enlarge)

RTC temp. during test —> Msec offset from GPS —> Drift Error PPM

RTC temp during this test: 22°C
Drift: 35msec slow after 5 hours
Average error: +2ppm (with ±2ppm jitter)

At the Age=0 default, this RTC’s 1Hz output was 35 milliseconds behind the GPS pulse after five hours, which works out to roughly 60 seconds of drift per year. The average error hovered around +2 ppm. This is well within spec for a -M chip, as ±5ppm implies up to about 158 seconds of drift per year.

Then the Aging register was set to -17 (as determined by the test from Step1) and the drift examination was done again. That same RTC module was now only 0.5 milliseconds behind the GPS pulse after five hours, with the slope-derived error averaging close to zero ppm:

Higher 23°C temps on 2nd test
Less than 1ms of drift in 5h
Average error: close to 0ppm (but jitter is the same ±2ppm)

So with the correct aging offset this -M chip could be expected to drift less than a second per year. Of course this only applies near our Step 1 testing temperature, but in general: if you have found the best aging offset correction, the msec difference between a 1Hz alarm from the RTC and the GPS pulse should change very little over a short test.

It’s worth noting that there is ±2ppm of jitter in the calculation with that -M chip (above) that is not present with -SN chips. The -SN shown below had a straight linear drift of 20 milliseconds slow over five hours when its Aging register was left at the zero default (that’s about 35 seconds/year, or 1ppm), but the same RTC had near-zero drift over five hours when the aging offset was set to -21:

RTC temp rise during this test
Msec drift approaching zero after 5h, with TCXO adjustment.
Error ppm ~0, with very little jitter on this -SN chip
Error(ppm) vs Runtime(sec): This drift verification on a DS3231-M was done with an age offset of -33 from the Step 1 test. The b term in the Excel linear-trendline fit is less than the 0.12ppm/bit register adjustment, confirming that -33 is optimal for this chip. The absolute timing change over this 2.5h test was less than 0.5 msec faster than the GPS pulse.

Even with temperature rising 3°C during the test, that -SN stays within a tighter tolerance than the -M. This difference in short-term variability explains why the offset determination settles so quickly with a -SN, but can wander around for some time with a -M. The code used here in Step 3 is like a slow, verbose version of what’s being done in Step 1 that shows all the intermediate readings. If you put a linear trendline on the graph of the error PPM from running this test with the offset left at the zero default, you can estimate how much age register adjustment it would take to shift those readings until the average is near zero. The aging offset suggested by the test in Step 1 should be close to the result of dividing the ‘b’ term from the y=mx+b trendline fit equation by 0.1ppm (SN) or 0.12ppm (M), and changing the sign.
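That divide-and-flip-the-sign estimate is easy to sanity-check in code (my own helper; the sensitivities are the nominal datasheet figures). Feeding in the +2 ppm average error from the Age=0 run of the -M chip returns -17, the same value Step 1 produced for that module:

```cpp
#include <cmath>

// Estimate the aging-register value from the 'b' intercept of a
// ppm-vs-time trendline taken with the register left at zero.
// Convention: positive b = falling behind GPS (running slow), and a
// negative register value removes capacitance to speed the chip up.
int offsetFromTrendline(double b_ppm, double ppmPerLSB) {
    return (int)std::lround(-b_ppm / ppmPerLSB);
}
```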

On his blog, NNP also demonstrated how the two chip variants have different responses to temperature changes:

Temp(°C) vs Time(sec): The RTC modules were exposed to this overall pattern although the final tests were run faster (blue = program, orange = PID actual)
Drift Error (ppm) vs Time (sec): The spikes are due to the fact that the TCXO corrections only occur every 64 seconds on the -SN variant, but after each one the chip quickly returns to its ±2ppm spec.
While still within the rated ±5ppm, the -M shows greater variability. With the -M chips doing corrections every 10 seconds, I’m surprised the overall TCXO response takes longer.

[ images from: http://radiopench.blog96.fc2.com/blog-entry-960.html ]

This confirms what I already suspected from our data: the -SN chips are a much better choice for outdoor environments where temperatures vary over that entire 50°C range. Although the temperature coefficient of the MEMS oscillator is not specified in the datasheet, loggers built with -M chips are probably still fine for stable thermal environments, and with a tuned Aging register I’d expect them to drift less than ten seconds per year indoors. There are other insights if you dig into NNP’s blog. For example, drift is also affected by the physical orientation of the chip with respect to gravity. I had no idea that orientation was a problem for all quartz resonators unless the crystal is cut into a special shape to avoid it. This highlights the fact that with so many different factors affecting the RTC, the Aging offset adjustment will never be perfect; you are simply aiming to reduce the ‘average’ drift. These tests are also affected somewhat by the stability of the oscillator on the UNO, so we have a chicken & egg thing there.

I will be doing more tests to see what other insights it can give into our ProMini / DS3231 combination. With the ability to synchronize clocks so precisely (in Step 2) you can see the outliers in a group in as little as 24 hours, simply by watching the LED blinks. I already do multiple rapid burn-in tests with new loggers as part of pre-deployment testing, so visually checking synchronization during those runs is a low-effort way to verify the RTC. One thing I’ve long suspected, but have never seen any actual proof of, is that the process of updating registers and generating alarms also affects the clock time. Perhaps handling the I2C transaction blocks the update of some internal counter? I could test this by setting one logger in a ‘well matched’ group to wake every second, while the others blink at five seconds, and see how many days it takes for the fast blinker to shift out of alignment.

It would be interesting to get a couple of the newer DS3232 / DS3234 chips, and test how much they drift with their TCXO conversions pushed out to 512 seconds for ~1µA current, instead of the 64-second default that pushes the DS3231 up to about 3µA average standby current.


Last Word

With these three tools to wrangle our time series we could see drift as low as five seconds per year from an SN in a stable environment, so our little 328p loggers can finally go toe-to-toe with all those smugly networked ESP32s. I will eventually combine these into a general RTC testing utility, but there are plenty of use cases for each as an isolated step – especially if you were tweaking the code for use with different RTC chips. Likewise, with the Neo being a 3v device I could add a few header pins to our loggers for the serial coms and run everything with the logger alone.

But I’m somewhat sentimental about the original UNOs, so it’s nice to dust them off once in a while. Another factor is that if you run two separate instances of the IDE, you can choose a different COM port for each instance. So you can simultaneously have that UNO/GPS combination connected, and a ProMini logger connected via its own UART module. As long as you align the code open in each instance with the appropriate port, you can run those RTC tests on the UNO in the background while you work in the other instance of the IDE. This will be very handy when servicing loggers in the field. I will secure those field calibration rigs with hot glue and make them more compact with a ‘sit-on-top’ protoshield.

The trade-off when adjusting the Aging register is that the reduced drift within the tuning temperature range comes at the cost of increased non-linearity at more extreme temperatures. But the underwater/cave sites we deploy into are quite stable compared to surface conditions, so it’s probably worth it. Physical aging rates are not necessarily constant or linear, so I expect that the register will need a yearly update. The first complete generation of fully sync’d & calibrated RTCs will get deployed this fall, so it will be a while before I can check how aging is changed by exposure to real-world temperature variation. I’ll be happy if I can get the -M’s below 1 second of drift per month under those conditions. I would hope to see the aging stabilize after the first year of operation, in a manner similar to sensor aging.

At the very least, we’ve greatly enhanced our ability to remove any duffers from those cheap eBay parts. I’m still wondering what new sensor possibilities better time discipline might enable but I can already see some interesting labs for the next cohort of e360 students. One of the more challenging things to demonstrate within the constraints of a classroom is the relationship between datasheet error specifications and sensor drift. I’ll set aside a few -M modules for those teaching loggers so the results are more dramatic.


Using a $1 DS3231 Real-time Clock Module with Arduino
A look inside the DS3231 real-time clock by HeyPete
5 Month DS3231 Drift Results at HeyPete.com
Setting the DS3231 Aging register to an optimum value by ShermanP
Super-Accurate GPS-Corrected RTC Clock without NTP
Precise measurement of RTC error using GPS from Needle Nose Pliers
How they test Aging Performance in Crystals from Connor Winfield
Choosing the right RTC at Hackaday & module photos at Craft Corner
Comparing DS3231 / PCF8563 / MCP79400 / DS1307 RTCs
A collection of very detailed RTC tests from Dan Drown
And his GPS module measurements Part1, Part2, Part3 and More
An architect’s guide to GPS data formats, Estimating GPS time to FIRST fix
The u-center program from u-blox, with multiple displays
Can the 60Hz mains frequency be used as a reference?
A Timing-sync Protocol for Sensor Networks
PTP clock synchronization over a WAN backbone
Arduino system clock accuracy [ ±1000ppm], various crystal error specs
RTC seconds/day to ppm drift calculator

Just adding a reminder here: the DS3231 doesn’t have a built-in mechanism to disable alarms after they’ve been set. You can clear the alarm flag to release SQW after it fires, but the alarm will still be armed and will fire again at the next time match – no matter how you set the ‘enable/disable’ bits. The ONLY way to disable alarms on the DS3231 is to load those registers with an ‘invalid’ h/m/s combination that the actual time can never reach (eg: minutes/seconds set to 62, or the date set to Feb 31st). You can also set the EOSC bit of the control register to logic 1, which stops the oscillator when the DS3231 is on VBAT power – but you will then be unable to check the clock drift at the next logger download. Halting the internal oscillator is also the only way to stop the temperature conversions.
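Since the time and alarm registers store BCD, ‘seconds = 62’ gets written as the byte 0x62 – a value the BCD seconds counter can never reach. A sketch of that encoding (my own helpers; the actual register writes via Wire are omitted):

```cpp
#include <cstdint>

// Pack a decimal value into the BCD format the DS3231 registers use.
uint8_t toBCD(uint8_t v) { return (uint8_t)(((v / 10) << 4) | (v % 10)); }

// 62 seconds can never match the running clock, which permanently
// silences an alarm that the enable bits alone cannot disarm.
uint8_t impossibleSeconds() { return toBCD(62); }   // writes as 0x62
```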

From the datasheets you can see that the -M uses about half as much power (about 26 milliamp-seconds/day) as the -SN chip (45 mAs/day) to do its TCXO corrections; however, our standard ProMini/DS3231-SN module combination usually sleeps around 890nA, while the same logger built with a DS3231-M sleeps closer to 1680nA (when a temperature-compensation reading is not occurring). A sleeping 328p-based ProMini draws ~150nA (regulator removed & BOD off), and the 4K AT24C32 EEPROMs on the modules draw less than 50nA when not being accessed. So the -M chips have more than 2x the ~700nA Ibat timekeeping draw of the -SN chips.
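Those mAs/day figures convert to average current by dividing by the 86,400 seconds in a day – a quick check of my arithmetic:

```cpp
// Convert an energy-per-day figure in milliamp-seconds into the
// equivalent average current in microamps: mAs/day -> uA.
double mAsPerDayToAvgMicroAmps(double mAsPerDay) {
    return mAsPerDay * 1000.0 / 86400.0;
}
```

So the TCXO conversions account for roughly 0.3µA (-M) and 0.5µA (-SN) of the average draw, which suggests most of the -M’s extra timekeeping current is baseline rather than conversion power.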

How to measure PAR (Photosynthetically Active Radiation) using a BH1750 Lux Sensor

A 3d printed stack of radiation shields goes around the 30mL centrifuge tube housing our 2-module logger. A universal ball joint by DiZopfe was adapted for the leveling mechanism which is critical for the calibration.

Space nerds have an old saying that ‘LEO is half way to the moon…‘ and Arduino hobbyists tend to feel the same way about getting sensor readings displayed on a live IoT dashboard. But that ignores the real work it takes to generate data that’s actually useable. To paraphrase Heinlein: ‘Calibration is half way to anywhere…’ Now that our 2-Part logger is both easy for students to build and robust enough for field use, we can focus on developing sensor calibration methods that are achievable by teachers and researchers in low-resource settings.

Light sensors seem straightforward, with numerous how-to guides at Hackaday, Adafruit, Sparkfun, etc. In reality, light sensors are some of the trickiest ones to actually deploy – which is why so few low-end climate stations include them. This post describes a method for calibrating a BH1750 lux sensor to estimate Photosynthetically Active Radiation (PAR). Not everyone can afford a LI-COR 190 or Apogee SQ quantum sensor to use as a benchmark, so here we will use a clear-sky model calculation for the cross-calibration, despite the dynamic filtering effects of the atmosphere on natural sunlight. Using a diffuser to restore cosine behavior means we can’t calculate PPFD directly from Lux without some y=mx+b coefficients.




Light Sensor Issue #1: Changing Spectral Distribution

Peak solar irradiance received on any given day varies by latitude and season, as does the overall daily pattern. Light emitted from the sun has a stable distribution of frequencies; however, the spectrum at the earth’s surface varies across the day, with more short (blue) wavelengths around midday and more long (red) wavelengths at sunrise & sunset, when the rays travel further through the atmosphere. We will avoid this source of error by calibrating with data from the hours around solar noon, as determined by the NOAA Solar Calculator. Even with high-quality sensors, morning and evening data can be compromised by other factors like condensation, which changes the refractive index of lenses and diffusers.

Light Sensor Issue #2: Sensitivity Bands

Average plant response to light as Relative Photosynthetic Efficiency (%) vs Wavelength (nm) compared to Bh1750 Response Ratio vs Wavelength

Lux sensors have a maximum sensitivity near 550nm, mimicking the response of photo-receptors in the human eye. Plants are similarly limited to frequencies that can be absorbed by the various chlorophylls. These two bands have a high degree of overlap, so we can avoid the Baader UV/IR-Cut filters (420–685nm bandpass) or stack of Roscolux filters that would be needed with photodiodes that respond to a wider range of incoming radiation. The cross-calibration still requires the relative ratio of frequencies within the targeted region to remain stable, so a PAR conversion derived under full sunlight may not be valid under a canopy of tree leaves, or for the discontinuous spectra of ‘blurple’ grow-lights.

Light Sensor Issue #3: Dynamic Range

I tested two inexpensive Bh1750 sensor modules, and the diffuser dome that comes with the red ‘Light Ball’ version turned out to be the deciding factor. When powered from a 3v coin cell, these sensors add 8µA to the logger’s sleep current if you leave the 662 regulator in place and <1µA if you remove it.

Full summer sunlight can exceed 120,000 Lux and there aren’t many sensors in the Arduino ecosystem that handle that entire range. The BH1750 can, with registers set to its least sensitive configuration. Our logger code already does this because QUALITY_LOW & MTREG_LOW(31) integrations take only 16-24 milliseconds, rather than the 120-180ms needed for high resolution readings. The data sheet implies that the sensor will flatline before 100,000 lux, but at its lowest sensitivity it delivers reasonable data above 120k, though linearity may be suspect as flux approaches sensor saturation. The sensor also has a maximum operating temperature of 85°C, which can be exceeded if your housing suffers too much heat gain. Alternative sensors like the MAX44009, TSL2591 and SI1145 have similar thermal limits. Like most light sensors, the Bh1750 increases its output readings by a few percent as the sensor warms.

Commercial vs DIY diffusers. Bullseye level indicators are epoxied to the top shield with white JB Marine Weld. The larger 43mm diameter bubble (right) was far more effective than the smaller 15mm (left).

DIY builders often add diffusers made from translucent #7328 Acrylite or PTFE sheets to reduce sunlight intensity into a given sensor’s range. I tried printing domes with clear PETG and hand sanding them with fine grit to increase the diffusive power. While these did reduce light levels by more than 50%, my DIY diffuser didn’t quite match the smooth overall response seen with the diffusers that came with the round PCB modules. This may have been due to a slight misalignment between the sensor and the focal point of the low-poly dome I could make in Tinkercad. The white dome that comes with the red Bh1750 module reduced peak light levels in full sunlight from the 110k Lux reported by a ‘naked’ sensor to about 40k Lux. Each sensor varied somewhat in its response, but I didn’t do any batch testing to quantify this as I was calibrating each sensor directly to the reference model. I initially tried clear JB Weld as a sealant but this caused problems: sometimes contracting enough to peel parts away from the PCB, and yellowing significantly after a couple of weeks of full sun exposure. In later builds I used only a thin coating of silicone conformal coating, relying on an epoxy seal around the base of the diffuser to provide most of the waterproofing.

Light Sensor Issue #4: Angular Response

Bh1750 Directional Characteristics [Figs 4&5] from the datasheet. Sensor response is different on the two axes, so orientation must be labeled on the outside during assembly. The left graph is closer to Lambertian, so the sensor gets deployed with its connection pads oriented North–South relative to the sun’s east-west motion. Based on these curves alone, we would expect a ‘naked’ BH sensor to under-report relative to the Lambertian ideal. That is indeed what I observed in our early sensor comparison tests, leading to our selection of the round red PCB modules for the calibration because the included diffuser dome compensated nicely.

Lambert’s cosine law describes the relationship between the angle of incidence and the level of illuminance on a flat matte surface as being proportional to the cosine of the zenith angle (as the sun changes position throughout the day). At an incident angle of 60°, the number of photons hitting a sensor surface is half what it would be if the same light source was positioned directly above the sensor. This effect is mathematically predictable, but imperfections, diffraction, and surface reflection mean that sensor response tends to diverge from the ideal as the angle increases. So manufacturers surround their sensors with raised diffuser edges and recesses on the top surface which change light collection at low sun angles to restore an ideal cosine response. In general, diffusers make the compass orientation of the sensor less likely to interfere with calibration, but leveling the sensor is still absolutely required.
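The cosine relationship is simple enough to sketch in a few lines of Python (the function name and lux values here are just for illustration):

```python
import math

def lambertian_illuminance(zenith_deg, normal_lux):
    """Illuminance on a horizontal surface from a source at the given
    zenith angle, relative to 'normal_lux' measured with the source
    directly overhead (zenith = 0 degrees)."""
    return normal_lux * math.cos(math.radians(zenith_deg))

# At 60 degrees incidence the photon flux on the sensor is half:
half = lambertian_illuminance(60, 100000)   # ~50000 lux
```

Real sensors fall below this curve at high zenith angles, which is exactly the divergence the raised diffuser geometry tries to correct.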

Light Sensor Issue #5: Temporal Resolution

Unlike most environmental parameters, light levels can change instantaneously. Most commercial sensors aggregate 1 or 2 second readings into 5 to 15 minute averages. This makes it much easier to estimate energy output from solar panels, or calculate the Daily Light Integral for a crop, because both of those use cases are more concerned with the area under the curve than with individual sensor readings. However, in our case of calibrating a sensor against an irradiance model, we must use instantaneous readings so we can exclude data from periods where the variability is high. Averaging would smooth over short term interference from clouds, birds, or overhead wires, potentially feeding bad data into the calibration. We read the BH1750 once per minute at its fastest/lowest resolution.
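One benefit of keeping instantaneous readings is that unstable periods can be screened out afterwards. A rough sketch of that kind of filter in Python – the window size and 5% spread threshold are arbitrary example values, not what our logger code does:

```python
def stable_periods(lux_readings, window=5, max_spread=0.05):
    """Return indices of readings that fall inside at least one window
    where the min-to-max spread stays under max_spread (5% here - an
    arbitrary example threshold), screening out short shading events
    from clouds, birds, or overhead wires."""
    keep = set()
    for i in range(len(lux_readings) - window + 1):
        w = lux_readings[i:i + window]
        if max(w) - min(w) <= max_spread * max(w):
            keep.update(range(i, i + window))
    return sorted(keep)

# One shadowed reading at the start; only the stable run survives
readings = [18000, 40100, 40050, 40120, 40080, 40110, 40090, 40060]
good = stable_periods(readings)   # indices 1..7
```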


A Radiation Shield

My original concept was to epoxy the light sensor directly onto the cap and slide separate radiation shields over the housing tube with a friction fit – but that approach suffered excessive heat gain. It took several design iterations to discover that plastics are often transparent to IR – so most of the 3D printed weather station shields you find in maker spaces won’t work very well. While PLA does block/reflect the visible spectrum, it re-emits a portion of any absorbed energy as IR, which passes right through – turning the central housing tube into a tiny greenhouse. You need to add layers of metal foil to reflect that IR, and there must be an air gap between the materials or the heat still crosses by conduction. The process of moving those surfaces away from the logger also meant placing the sensor onto a small raised ‘stage’ that could pass through the upper shield. This allows easier replacement after the sensors expire, or the use of an entirely different sensor, without changing the rest of the design. I still don’t know the operating life of these sensors at full sunlight exposure levels.

2″ Aluminum HVAC tape is applied to the IR shield layer. (click to enlarge these photos)
The IR shield slides to about 8mm below the top shield which has holes along the rim to vent heated air.
The sensor stage slides on the vertical rails and passes through the upper shield.
The logger’s green cap then pushes the sensor stage into place with a snug click-fit. Foil is wrapped around the logger housing tube.
Three smaller gill shields slide onto the rails, with plenty of aligned holes for vertical airflow through to the top shield.
A lower IR shield is added to the bottom with metal side down to reflect thermal radiation emitted from the ground.

Here are temperature records of two side-by-side loggers with identical 3D-printed shields except that one has the three metal foil layers added and one does not:

Temp (°C) vs Time: Comparison of heat gain with, and without metal foil layers. Measured with the NTC sensor inside the logger housing at the center of the stack. The night time data shows a 0.25°C offset between the two sensors, indicating that they were not normalized before this run.

Interestingly, the 3°C delta seen in my foil vs no-foil tests matched the discrepancies identified by Terando et al. in their 2017 paper examining ad hoc Stevenson shields in ecological studies. Air gaps are required for IR reflecting layers to do their job, so most of the foil-backed roofing shingles on the market are ineffective because of direct surface contact. Both aluminum and stainless steel foils are common, but aluminum has a lower emissivity than stainless steel, meaning it should reflect more and emit less IR. There are also radiant barrier coating sprays used in industrial settings. High-end weather stations use fan ventilation or helical shields, but those designs may be a bit too complicated for DIY. And even 3D prints from tough materials like PETG or ASA would benefit from coating with something like Krylon UV protectant to extend their lifespan. I’ve also been thinking about adding some infrared cooling paint on the top surface of our weather stations. The challenge with anything that emits in the atmosphere’s transparency window between wavelengths of 8 and 13 microns is that you get significant accumulation of debris on surfaces in as little as one month of actual deployment: especially in the spring/fall when the surfaces get covered with morning dew, which then captures any windborne dust.

I’m still tweaking the shield design as more test data comes in, and hope to compare it to a fan aspirated model soon. Radiation shields are only needed if you want to capture accurate temperatures with the light readings on the same logger. The Bh1750 calibration alone could be done without shields, but mounting the sensor on some kind of flat surface makes it easier to add the required leveling bubble beside the sensor. The tradeoff for preventing solar heat gain is that shields introduce lag in the temperature response.

Pole Mount & Leveling Mechanism

As this is the first of our ‘garden series’ that will be built around the 2-part logger, I created a complete mounting system from a combination of 3D printed parts and PVC pipes. This adjustable leveling mechanism was modified from the Open Source Universal Ball Joint posted on Thingiverse by Arthur ZOPFE.

This socket slides over the end of a 1/2″ PVC pipe. A zip tie through the drilled cross-hole secures the pieces together.
A self standing 30mL centrifuge tube slides snugly into this fitting, again with holes for zip ties.
A large diameter twist ring makes it easy to adjust the sensor assembly while watching the bulls-eye level on the top shield.

This ball & socket approach works well for leveling, but to make the adjustments easier (ie. with less compressive force) I will add an O-ring to the bottom cup for some friction and give.

This ground spike has a foot plate to assist insertion and is asymmetric to provide more contact with the bed. It just barely fits on my Ender3 when printed diagonally. I created this model from scratch in Tinkercad, but the offset idea is not mine. Unfortunately, I saw the original so long ago I don’t know who to credit for it. The pole insert and holes are six-sided because internal 45° slopes can be printed without any supports, and you can simply bridge the internal 1cm top span.

A length of standard 1/2 inch PVC pipe is used for the riser between the spike and the leveling mechanism. Ideal height for temperature sensors is approximately five feet above the ground, usually in a shaded location facing away from the sun.


The Apogee Clear Sky Calculator

With this model we could even attempt a calibration against the shortwave spectrum for a DIY pyranometer, but it’s a much bigger stretch to say the 550nm peak of BH sensitivity is a good proxy for the whole 300–1300nm band of frequencies.

The Apogee Clear Sky Calculator helps operators of Apogee’s many light sensor products check whether those need to be sent in for re-calibration. When used near solar noon on clear, unpolluted days, the accuracy is estimated to be ±4%. We can cross-calibrate the readings from our humble Bh1750 to that model provided we use data from a cloudless day. I’m not sure what the temporal resolution of the ClearSky model is. The U.S. Climate Reference Network generally uses two-second readings averaged into five minute values, so it is likely that the ClearSky model has a similar resolution. This model has the best accuracy within one hour of solar noon, but we will push that out a few hours so we have enough data for the regression.

We could have used the Bird Clear Sky Model from NREL, with validation against real world data from one of the local SURFRAD stations at NOAA. That data is for full-spectrum pyranometers measuring in W/m², but you can estimate PAR as photosynthetic photon flux density (PPFD) from total shortwave radiation using a conversion factor into µmol s⁻¹ m⁻². Many solar PV companies provide online calculators for power density that could also be used for this kind of DIY sensor calibration.
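As a rough sketch of that shortwave-to-PPFD conversion – the ~45% PAR fraction and the 4.57 µmol/J figure are commonly quoted approximations, and both vary with sky conditions:

```python
def ppfd_from_shortwave(sw_wm2, par_fraction=0.45, umol_per_joule=4.57):
    """Rough PPFD estimate (umol m-2 s-1) from total shortwave irradiance
    (W/m2). Roughly 45% of clear-sky shortwave energy falls in the PAR
    band, and one watt of PAR carries roughly 4.57 umol/s of photons -
    both figures are approximations, not constants."""
    return sw_wm2 * par_fraction * umol_per_joule

# ~1000 W/m2 of summer sun works out to roughly 2000 umol/m2/s of PPFD
est = ppfd_from_shortwave(1000)
```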

Our Deployment Location

Most who live in urban areas are familiar with noise pollution; it is also hard to find undisturbed light environments. My best option for those critical hours around solar noon was my neighbour’s backyard garden:

The two sensors here are aligned on the east-west axis so they can be compared.

This location was relatively free of power lines and tree tops, but reflections from that white door caused a slight positive offset in the afternoon. Fences prevented the capture of morning and evening data which would have been interesting. But sunrise to sunset data is not required for our calibration.

The Calibration

After several weeks of logger operation we finally managed to capture data from a beautiful cloud-free day:

2024-07-27: Lux from a diffused ‘Light Ball’ Bh1750 sensor (Orange, left axis @1min) VS ClearSky Model PPFD (Purple, right axis @5min). You can see some stair-stepping in the model data, indicating that its temporal resolution might be only 10-15 minutes.

We logged raw single-shot Lux readings at one minute intervals, and because there is no averaging applied you can clearly see where overhead lines or birds created occasional short-duration shading. These outliers were excluded before generating the trendline shown below. The PAR values from the model were calculated using the ‘Auto fill’ option for humidity and temperature. On this day solar noon was at 12:57.

Linear y=mx+b fit between ClearSkyCalculator PPFD (y axis) vs Diffused BH1750 Lux (x axis) using 5 minute data points on 2024-07-27 between 10:00 and 16:00 [bracketing solar noon by three hours]. Two shadow outliers at 10:05 and 10:15 AM were excluded from the dataset.

Aerosols and variations in local temp/humidity produced some scatter, but this is a good result for calibration with natural light. The result might be improved by co-deploying a humidity sensor, but it’s not clear to me if humidity at ground level is what the model actually uses for its calculation. Some scatter is also created by the temporal resolution of the model. Using one type of sensor as a proxy for another limits the scope of the device, and we probably approached an accuracy of ±15% at best with this conversion. It’s worth remembering that most commercial light sensors are only calibrated to ±5%.
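The fit itself is just an ordinary least-squares y=mx+b regression, which you can do outside a spreadsheet too. A sketch in Python, with made-up numbers standing in for the real lux/PPFD pairs:

```python
def fit_lux_to_ppfd(lux, ppfd):
    """Least-squares y = m*x + b fit: reference PPFD on the y axis,
    the lux sensor being calibrated on the x axis."""
    n = len(lux)
    sx, sy = sum(lux), sum(ppfd)
    sxx = sum(x * x for x in lux)
    sxy = sum(x * y for x, y in zip(lux, ppfd))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

# Illustrative numbers only - use your own cloud-free-day data here
lux  = [10000, 20000, 30000, 40000]
ppfd = [520, 1010, 1530, 2020]
m, b = fit_lux_to_ppfd(lux, ppfd)
estimate = m * 25000 + b   # PPFD estimate for a new 25k lux reading
```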


Discussion

The biggest challenge at our mid-west location was that we had to run the loggers for several weeks before capturing the blue-sky day shown above. A typical time series from that Bh1750 sensor (under a light-reducing diffuser dome) looks like this:

Lux vs Time: 1 minute data captured with our 2-Part logger reading a red ‘light-ball’ Bh1750 module.
This unit had an extra 64k EEprom added to store the large amount of data that was generated.

Clouds often cause light levels to exceed those seen on clear days. This makes sense if you imagine a situation where there are no clouds directly overhead, but radiation reflected from the sides of clouds is reaching the sensor from multiple directions. The fact that clouds at different atmospheric levels have different effects is one of the things that makes climate models so complicated.

The Clear-Sky Calculator lets you generate data for any date/time, so it would be possible to do this calibration by aggregating cloudless periods from multiple days:

Detail of data from 7/15 and 7/12: what you are looking for is the smooth curve that indicates there were no high level clouds causing subtle variations in light level.
Inexpensive (~$60USD) PAR meters have started appearing on Amazon recently. I’m more than a little dubious about the term ‘quantum’ in the marketing, as they are probably just a photodiode and some filters.

Someone in Nevada would have no trouble gathering this kind of calibration data, but it might not be possible for people living in Washington. A low-cost alternative to using a clear-sky model for the calibration could be to compare the Bh1750 to one of the many smartphone grow-light meter apps, with a clip-on diffuser & cosine corrector. Every phone has a different sensor, so programs like Photone or PPFDapp usually have their own calibration procedures. While developing this exercise I also found a ‘for parts’ Seaward Solar Survey 100 on eBay for $20, and all it needed to bring it back to life was a good cleaning inside. I also found an old LI-1400 logger with a 190 pyranometer for only $120, and was pleasantly surprised when Apogee’s calculator showed it was still within 5%. As mentioned, you’d need to convert total radiation from those last two into PAR, or you could do the calibration to total shortwave. Hardware references that lack logging capability require more effort to gather calibration points, but they save you from having to wait for agreeable weather.

Other projects have built similar sensors, and with calibration Lux sensors are comparable to commercial PAR sensors if the spectral environment is consistent. Multi-channel sensors with overlapping frequency bands do a better job in situations with discontinuous light sources, like those used for indoor growing, or for measuring the extinction of PAR frequencies under water. In those cases a TCS3471 (3-channel), AS7341 (10-channel), or AS7265x (18-channel) sensor could be used, and finer frequency division can enable calculation of interesting ratios like NDVI or SPAD. Beyond that point you’re entering the realm of diffraction grating spectrometers, which allow a more nuanced treatment of the spectrum than the standard PAR weighting.

And if building your own datalogger is too challenging, you could reproduce the exercise described in this post with a Bluetooth UNI-T or a UT381 Digital Luminometer, which has some logging capability. But you will need to add extra diffusers to bring full sunlight down below its 20,000 Lux limit.


NREL Bird Clear Sky Model
Clear Sky Calculator from Apogee Instruments
NOAA SURFRAD data from irradiance measuring stations
Downloading from the National Solar Radiation Database.
Shortwave Radiation by Steve Klassen & Bruce Bugbee
Fondriest Solar Radiation & Photosynthetically Active Radiation
Designing a Low-Cost Autonomous Pyranometer by Peter van der Burgt
Various DIY PAR meter discussions at Planted Tank
Build Your Own Pyranometer by David Brooks
Ad hoc instrumentation methods in ecological studies produce biased temperature measurements. Terando et al. (2017)
Choosing Standard Bulbs for DIY PAR meter calibration
Daily Light Integral requirements for different plants.
PARbars: Cheap, Easy to Build Ceptometers
Creating a Normalized Vegetation Index Sensor with two LEDs
Hacking the Rubisco enzyme boosts crop growth 40%
Plants recycle UV into red light
How to calibrate NTC thermistors
How to build our 2-Part ProMini Data Logger

How to Normalize a Group of Pressure Sensors so they can be Deployed as a Set

Once your project starts to grow, it’s common to have multiple different sensors, from different vendors, measuring the same environmental parameter. Ideally those sensors would produce the same readings, but in practice there are significant offsets. Datasheets for the MS5837-02BA and MS5803-14BA that we will compare in this post claim an accuracy of ±0.5mbar and ±2ºC for the 2-bar sensor, while the 14-bar sensors are only rated to ±20mbar and ±2ºC. Sensors from Measurement Specialties are directly code compatible, so the units here were read with the same oversampling settings.

Barometric pressure from a set of nine MS58xx pressure sensors running on a bookshelf as part of normal burn-in testing. The main cluster has a spread of about 10 millibar, with one dramatic outlier >20 mbar from the group. These offsets are much wider than the datasheet spec for those 2-bar sensors.

But this is only a starting point: manufacturers have very specific rules about things like temperature ramps during reflow, and it’s unlikely that cheap sensor modules get handled that carefully. Housing installation adds both physical stress and thermal mass which will induce shifts, as can the quality of your supply voltage. Signal conditioning and oversampling options usually improve accuracy, but there are notable exceptions like the BMP/E280, which suffers from self-heating if you run it at the startup defaults.

As described in our post on waterproofing electronics, we often mount pressure sensors under mineral oil with a nitrile finger cot membrane, which introduces thermal lag.

Sensors like NTC thermistors are relatively easy to calibrate using physical constants. But finding that kind of high quality benchmark for barometric sensors is challenging if you don’t live near a government-run climate station. So we typically use a normalization process to bring a set of different sensors into close agreement with each other. This is a standard procedure for field scientists, but information on the procedure is hard to find because the word ‘normalization’ means different things in different industry settings. In Arduino maker forums it usually describes scaling the axes of a single accelerometer with (sensor – sensor.min)/(sensor.max – sensor.min), rather than standardizing a group of different sensors.

When calibrating to a good reference you generally assume that all the error is in your cheap DIY sensor, and then do a linear regression by calculating a best-fit line with the trusted data on the Y axis of a scatter plot. However, even in the absence of an established benchmark you can use the same procedure with a ‘synthetic’ reference created by drawing an average from your group of sensors:

Note: Sensor #41 was the dramatic outlier more than 20 millibar from the group (indicating a potential hardware fault), so its data is not included in the initial group average.

With that average you calculate y = Mx + B correction constants using Excel’s slope & intercept functions. Using these formulas lets you copy/paste equations from one data column to the next which dramatically speeds up the process when you are working through several sensors at a time. It also recalculates those constants dynamically when you add or delete information:
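For anyone working outside of Excel, the same slope & intercept normalization can be sketched in Python. The sensor IDs and readings below are made up for illustration:

```python
def normalize_to_group(sensors):
    """Fit each sensor to the group average with y = m*x + b, where the
    synthetic reference (the group mean at each time step) supplies the
    y values. 'sensors' maps a sensor id to its list of raw readings;
    returns {id: (m, b)} correction constants."""
    n = len(next(iter(sensors.values())))
    reference = [sum(s[i] for s in sensors.values()) / len(sensors)
                 for i in range(n)]

    def fit(x, y):
        k = len(x)
        sx, sy = sum(x), sum(y)
        sxx = sum(v * v for v in x)
        sxy = sum(a * b for a, b in zip(x, y))
        m = (k * sxy - sx * sy) / (k * sxx - sx * sx)
        return m, (sy - m * sx) / k

    return {sid: fit(raw, reference) for sid, raw in sensors.items()}

# Toy example: sensor B reads ~2 mbar higher than A
constants = normalize_to_group({"A": [1000.0, 1005.0, 1010.0],
                                "B": [1002.0, 1007.0, 1012.0]})
m, b = constants["B"]
corrected = m * 1002.0 + b   # pulls B's first reading to the group mean
```

Remember to exclude obviously faulty units (like sensor #41 here) from the reference average before fitting.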

The next step is to calculate the differences (residuals) between the raw sensor data and the average, both before and after the Y=Mx+B corrections have been applied to the original pressure readings. These differences between the group average and an individual sensor should be dramatically reduced by the corrections:

After you copy/paste these calculations to each sensor, create x/y scatter plots of the residuals so you can examine them side-by-side:

Now we can deal with the most important part of the entire process, because normalization with bad input data will produce even more misleading results. While the errors shown above are centered around zero, the patterns in these graphs indicate that we are not finished. In the ideal case, residuals should be soft fuzzy distributions with no observable patterns. But here we have a zigzag that shows up for most of the sensors. This is an indication that one (or more) of the sensors included in the average has some kind of problem. Scrolling further along the columns identifies the offending sensors, with nasty looking residual plots even after the corrections have been applied:

Sensor #41 (far right) was already rejected from the general average because of its enormous offset, but the high amplitude jagged residual plots indicate that the data from sensors #45 and #42 are also suspect. If we eliminate those two from the reference average the zigzag pattern disappears from the rest of the sensors in the set:

There’s more we could learn from the residual distributions, but here we’ve simply used them to prune our reference data, preventing bad sensor input from harming the average we use for our normalization.

And what do the sensor plots look like after the magic sauce is applied?

The same set of barometric pressure sensors, before and after normalization corrections. (minus #41 which could not be corrected)

It’s important to note that there is no guarantee that fitting your sensors to an average will do anything to improve accuracy. However, sensors purchased from different vendors, at different times, tend to have randomly distributed offsets. In that case normalization improves both precision and accuracy, but the only way to know whether that has happened is to validate against some external reference, like the weather station at your local airport. There are several good long-term aggregators that harvest METAR data from these stations, like this one at Iowa State, or you can get the most recent week of data by searching for your local airport code at weather.gov

METAR is a weather reporting format used predominantly by pilots and meteorologists, and those stations report pressure adjusted to ‘Mean Sea Level’. So you will have to adjust your data to MSL (or reverse the correction on the airport data) before you can compare it to the pressure reported by your local sensors. For this you will also need to know the exact altitude of your sensors when the data was gathered, to remove the height offset between your location and the airport station.

Technically speaking, you could calibrate your pressure sensors directly to those official sources. However there are a lot of Beginner, Intermediate and Advanced details to take care of. Even then you still have to be close enough to know both locations are in the same weather system.

Here I’m just going to use the relatively crude adjustment equations:
Station Pressure = SLP – (elevation/9.2) and millibar = inchHg × 33.8639 to see if we are in the ballpark.
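That ballpark adjustment in code form – the elevation and sea-level pressure values here are hypothetical:

```python
def station_pressure_mbar(slp_inhg, elevation_m):
    """Crude conversion of sea-level pressure from a METAR report (inHg)
    to station pressure (mbar) at your sensor's elevation, using the
    same rough 9.2 metres-per-millibar figure as the text."""
    slp_mbar = slp_inhg * 33.8639
    return slp_mbar - elevation_m / 9.2

# 29.92 inHg reported at sea level, logger sitting at 230 m elevation:
p = station_pressure_mbar(29.92, 230)   # ~988 mbar
```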

Barometric data from the local airport (16 miles away) overlaid on our normalized pressure sensors. It’s worth noting that the airport data arrives at odd-minute intervals, with frequent dropouts, which would complicate a calibration to that reference.

Like most pressure sensors, an MS58xx also records temperature because it needs that for its internal calculations. So we can repeat the entire process with the temperature readings from this sensor set:

Temperatures °C from a set of MS58xx Pressure sensors: before & after group normalization. Unlike pressure, this entire band was within the ±2ºC specified in the datasheet.

These sensors were sitting pretty far back on a bookshelf that was partly enclosed, so some of them were quite sheltered while others were exposed to direct airflow. So I’m not bothered by the spikes or the corresponding blips in those residual plots. I’m confident that if I had run this test inside a thermally controlled environment (ie: a styrofoam cooler with a small hole in the top) the temperature residuals would have been well behaved.

One of the loggers in this set had a calibrated NTC thermistor onboard. While this sensor had significant lag because it was located inside the housing, we can still use it to check if the normalized temperatures benefit from the same random distribution of errors that were corrected so nicely by the pressure normalization:

Once again, we have good alignment between a trusted reference (in red) and our normalized sensors.

Comments:

Normalization is a relatively low-effort way to improve sets of sensors – and it’s vital if you are monitoring systems that are driven primarily by gradients rather than absolute values. This method generalizes to many other types of sensors, although a simple y=Mx+B approach usually does not handle exponential sensors very well. As with calibration, the data set used for normalization should span the range of values you expect to gather with the sensors later on.

The method described here only corrects differences in Offset [with the B value] & Gain/Sensitivity [the M value] – more complex methods are needed to correct non-linearity problems. To have enough statistical power for accuracy improvement you want a batch of ten or more sensors and it’s a good idea to exclude data from the first 24 hours of operation so brand new sensors have time to settle. Offsets are influenced by several factors and some sensors need to ‘warm up’ before they can be read. The code driving your sensors during normalization should be identical to the code used to collect data in the field.

All sensor parameters drift, so, just like calibration constants, normalization constants have a shelf life. This is usually about one year, but can be less if your sensors are deployed in harsh environments. Fortunately this kind of normalization is easy to redo in the field, and it’s a good way to spot sensors that need replacing. You could also consider airport/NOAA stations as stable references for drift determination.


References & Links:

Decoding Pressure @ Penn State
Environmental Mesonet @ Iowa State
Calibrating your Barometer: Part1, Part2 & Part3
How to Use Air Sensors: Air Sensor Guidebook
ISA Standard Atmosphere calculator
Starpath SLP calculator
SensorsONE Pressure Calculators
Mean Sea Level Pressure converter


I have to add a special mention here of the heroic effort by liutyi in comparing different temperature & humidity sensors. While his goal was not normalization, the graphs clearly demonstrate how important that would be if you were comparing a group of sensors. Humidity sensors have always been a thorn in our side – both for lack of inter-unit consistency and because of their short lifespan in the field relative to other types of sensors. The more expensive Sensirions tend to last longer – especially if they are inside one of those protective shells made from sintered metal beads. KanderSmith also did an extensive comparison of humidity sensors with more detailed analysis of things like sensor response time.

You can use the map function to normalize range sensors where both the upper and lower bounds of the sensor vary. And you can use Binary Saturated Aqueous Solutions as standards.

How to calibrate NTC thermistors (A DIY method you can do at home)

This post describes a thermistor calibration with an accuracy better than ±0.15°C that can be achieved by people who don’t have access to lab equipment. This method is particularly suitable for the 10k NTC on our 2-module data logger, handling it in a way that is easy to standardize for batch processing (ie: at the classroom scale). We use brackets to keep the loggers completely submerged because the thermal conductivity of the water around the housing is required, or the two sensors would diverge. The target range of 0° to 40°C used here covers moderate environments, including the underwater and underground locations we typically deploy into. This method is unique in that we use a freezing process, rather than melting ice, for the 0°C data point.

Use stainless steel washers in your hold-downs to avoid contamination of the distilled water and provide nucleation points to limit super-cooling. Before creating this bracket we simply used zip-ties to hold the washer weights.

Reading a thermistor with digital pins uses less power, and gives you the resistance of the NTC directly from the ratio of two Input Capture Unit times. Resolution is not set by the bit depth of your ADC, but by the size of the reservoir capacitor: a small ceramic 0.1µF [104] delivers about 0.01°C, with jitter in the main system clock imposing a second limit on resolution at nearly the same point. Larger reservoir capacitors increase resolution and reduce noise, but take more time and use more power. The calibration procedure described in this post will work no matter what method you use to read your NTC thermistor.
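The ratio method works because the time for an RC circuit to cross a fixed logic threshold is proportional to R×C, so timing the same capacitor against a known reference resistor cancels both the capacitance and the threshold voltage. A sketch of that arithmetic, with hypothetical tick counts:

```python
def ntc_resistance(t_ntc_ticks, t_ref_ticks, r_ref_ohms=10000.0):
    """Since t = R * C * k for a fixed threshold (k constant), timing the
    same capacitor through an NTC and then through a known reference
    resistor gives R_ntc = R_ref * (t_ntc / t_ref). The tick counts are
    hypothetical Input Capture Unit values."""
    return r_ref_ohms * t_ntc_ticks / t_ref_ticks

# If the NTC's timing takes 1.5x as long as the 10k reference resistor's:
r = ntc_resistance(15000, 10000)   # -> 15000 ohms
```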

The I2C reference sensor is connected temporarily during the calibration via Dupont headers. Always give your reference sensors serial numbers so that you can normalize them before doing the thermistor calibrations.

Off-the-shelf sensors can be used as ‘good enough’ reference thermometers provided you keep in mind that most accuracy specifications follow a U-shaped curve around a sweet spot that’s been chosen for a particular application. The Si7051 used here has been optimized for the medical market, so it has ±0.1° accuracy from 35.8 to 41° Celsius, but that falls to ±0.13° at room temperatures and only ±0.25° at the ice point. If you use some other reference sensor (like the MAX30205 or the TSYS01) make sure its datasheet specifies how the accuracy changes over the temperature range you are targeting with the calibration.

The shortened Steinhart–Hart equation used here is not considered sufficiently accurate for bench-top instruments, which often use a four or five term polynomial. However, in ‘The Guide on Secondary Thermometry’ by White et al. (2014), the three-term equation is expected to produce interpolation errors of only about 0.0025°C over a range from 0 to 50°C, and that is acceptable for most monitoring. To calculate the three equation constants you need to collect three temperature & resistance data pairs, which can be entered into the online calculator at SRS or processed with a spreadsheet.
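If you'd rather not rely on the online calculator, the three data pairs define a 3x3 linear system in A, B and C that a short script can solve. The calibration pairs below are hypothetical, not from a real run:

```python
import math

def steinhart_constants(pairs):
    """Solve 1/T = A + B*ln(R) + C*ln(R)^3 for A, B, C from three
    (temp_C, ohms) calibration pairs, via Cramer's rule on the 3x3 system."""
    rows, rhs = [], []
    for temp_c, ohms in pairs:
        x = math.log(ohms)
        rows.append([1.0, x, x ** 3])
        rhs.append(1.0 / (temp_c + 273.15))  # kelvin on the left side

    def det3(m):  # determinant of a 3x3 matrix
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(rows)
    def replaced(col):  # matrix with one column swapped for the RHS vector
        return [[rhs[i] if j == col else rows[i][j] for j in range(3)]
                for i in range(3)]
    return tuple(det3(replaced(col)) / d for col in range(3))

# hypothetical pairs for a 10k/3950 NTC: ice point, room temp, warm bath
A, B, C = steinhart_constants([(0.0, 33000.0), (22.1, 11200.0), (38.4, 5600.0)])
```

By construction the fitted curve passes exactly through all three calibration points, so a round-trip check (resistance back to temperature) is a quick sanity test for typos.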

While these technical sources of error limit the accuracy you can achieve with this method, issues like thermal lag in the physical system and your overall technique are more important. In general, you want each step of the calibration process to occur as slowly as possible. If the data from a run doesn’t look the way you were expecting, then repeat the procedure until those curves are well behaved and smooth. Make sure the loggers stay dry during the calibration – switching to spare dry housing tubes between the baths if necessary: moisture is the greatest cause of failure in sensors, and humidity/water always lowers the resistance of thermistors. If in doubt, let everything dry for 24 hours before re-doing a calibration.

Data Point #1: The freezing point of water

The most common method of obtaining a 0°C reference is to place the sensor into an insulated bucket of stirred ice slurry that plateaus as the ice melts. This is fine for waterproof sensors on the end of a cable, but it is not easily done with sensors mounted directly on a PCB. So we immerse the loggers in collapsible 1200ml silicone food containers filled with distilled water. This is placed inside a well-insulated lunch box, and the combined assembly is left in the freezer overnight with the logger reading every 30 seconds.

Weighted holders keep each logger completely immersed. Soft-walled silicone containers expand to accommodate any volume change as the water freezes. This prevents the centrifuge tube housings from being subjected to pressure as the ice forms. Position the loggers so that they are NOT in direct contact with the sides or the lid of the silicone container.
The outer box provides insulation to slow down the freezing process. After testing several brands it was found that the Land’s End EZ wipe (9″x8″x4″) and Pottery Barn Kids Mackenzie Classic lunch boxes provided the best thermal insulation because they have no seams on the solid molded foam interior which also doesn’t absorb water spilled while moving the containers around.

For the purpose of this calibration (at ambient pressure) we can treat the freezing point of pure water as a physical constant. So no reference sensor is needed on the logger while you collect the 0°C data. Leave the lunch box in the freezer just long enough for a rind of ice to form around the outer edges while the main volume of water surrounding the loggers remains liquid. I left the set in this photo a bit too long as that outer ice rind is much thicker than it needed to be for the data collection. Do not let the water freeze completely solid (!) as this will subject the loggers to stress that may crack the tubes and let water in to ruin your loggers.

The larger bubbles in this photo were not present during the freeze, but were created by moving the container around afterward for the photo.

The trick is recognizing which data represents the true freezing point of water. Distilled water super-cools by several degrees, and then rises to 0°C for a brief period after ice nucleation because the phase change releases 80 calories per gram, while the specific heat capacity of liquid water is only one calorie per gram per degree. So freezing at the outer edges warms the rest of the liquid – but this process is inherently self-limiting, which gives you a plateau at exactly 0°C after the rise:
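That 80:1 ratio means a small amount of ice formation warms a surprisingly large volume of liquid. A back-of-envelope sketch with made-up masses:

```python
LATENT_HEAT_CAL_PER_G = 80.0  # heat released when a gram of water freezes
SPECIFIC_HEAT_CAL = 1.0       # cal per gram per degree C for liquid water

def rebound_warming(grams_frozen, grams_liquid):
    """Temperature rise of the remaining liquid from the latent heat
    released at the freezing front (assuming it mixes evenly)."""
    return grams_frozen * LATENT_HEAT_CAL_PER_G / (grams_liquid * SPECIFIC_HEAT_CAL)

# just 10 g of ice forming in a 1000 g bath warms the liquid by 0.8 degrees -
# easily enough to lift it back out of a few degrees of supercooling
print(rebound_warming(10, 1000))  # -> 0.8
```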

NTC resistance (ohms) gathered during the freeze/thaw process, graphed with the y-axis inverted because of the negative temperature coefficient. The warm temperature data has been removed from the graphs above to display only the relevant cold-temperature data. Only the 10-20 minutes of data immediately after the rise from the super-cooled state is relevant to the calibration. Cooling the insulated chamber from its room temperature starting point to the supercooling spike shown above took 7-8 hours.

Depending on the strength of your freezer, and the quality of the outer insulating container, the ice-point may only last a few minutes before temperatures start to fall again. An average of the NTC readings from that SHORT plateau immediately after the supercooling ends is your 0°C calibration point.  This is usually around 33000 ohms for a 10k 3950 thermistor. Only the data immediately after super cooling ends is relevant and the box can be removed from the freezer any time after that event. I left the example shown above in the freezer too long but you have a reasonable window of time to avoid this. Once the freeze process initiates, it usually takes about 8 hours for the entire volume to freeze solid – after which you can see the compressor cycling as the now solid block cools below 0°C. You want to pull the sensors out of the freezer before that solid stair-step phase (at 8:00 above) if possible.
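Picking that plateau window by eye and averaging it is easily scripted. A minimal sketch with made-up readings:

```python
def plateau_average(readings, start_min, end_min):
    """Average the NTC ohms inside a hand-picked window of minutes -
    the short flat section right after the supercooling rebound."""
    vals = [ohms for minute, ohms in readings if start_min <= minute <= end_min]
    return sum(vals) / len(vals)

# hypothetical (minute, ohms) data around the rebound: the supercooled spike
# at minute 0, then the plateau, then temperatures falling away again
data = [(0, 34100), (5, 33010), (10, 32990), (15, 33000), (20, 33150)]
print(plateau_average(data, 5, 15))  # -> 33000.0, the 0 deg C reference
```

However you compute it, inspect the graph first – averaging across a compressor cycle or the falling tail after the plateau will bias the reference point.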

If the supercooling spike is not obvious in your data then change your physical configuration to slow the cooling process until it appears. You want the inner surface of your silicone container to have smooth edges, as sharp corners may nucleate the ice at 0°C, preventing the supercooling spike from happening. Use as much distilled water as the container will safely hold – the loggers should be surrounded by water on all sides.

In this image a freezer compressor cycle happened during the post-supercooling rise, making it hard to see where the plateau occurred. This run was re-done to get better data.

Most refrigerators cycle based on how often the door is opened and those cycles can overprint your data making it hard to interpret. If you put a room-temperature box of water in the freezer between 6-7pm, it usually reaches the supercooling point around 2am, reducing the chances that someone will open the refrigerator/freezer door at the critical time. Even then, unexpected thermal excursions may happen if the freezer goes into a defrost cycle or an automatic ice-maker kicks in during the run. The time to reach that supercooling event can be reduced by pre-cooling the distilled water to ~5°C in the refrigerator before the freezer run. If any of the points on your curves are ambiguous, then do that run again, making sure the water is completely ice free at the start.

As a technical aside, the energy released (or absorbed) during the phase change of water is so much larger than its typical thermal content that water based heat pumps can multiply their output significantly by making slushies.

Data Point #2:  Near 40°C

We have used the boiling point of water for calibration in the past, but the centrifuge tube housings would soften considerably at those temperatures. Ideally you want to bracket your data with equally spaced calibration points and 100°C is too far from the environmental conditions we are targeting. Heated water baths can be found on eBay for about $50, but my initial tests with a Fisher Scientific IsoTemp revealed thermal cycling that was far too aggressive to use for calibration – even with a circulation pump and many layers of added insulation. So we created an inexpensive DIY version made with an Arctic Zone Zipperless Coldloc hard-shell lunch box and a 4×6 inch reptile heating mat (8 watt). Unlike the ice point which must be done with distilled water, ordinary tap water can be used to collect the two warm temperature data pairs.

These hard-sided Arctic Zone lunch boxes can often be obtained for a few dollars at local charity shops or on eBay.
Place the 8-watt heating pad under the hard shell of the lunch box. At 100% power this tiny heater takes ~24 hours to bring the bath up to ~38°C. The bath temp is relatively stable since the heater does not cycle, but it does experience a slow drift based on losses to the environment. These heating pads sell for less than $15 on Amazon.

To record the temperature inside each logger, an Si7051 breakout module (from Closed Cube) is attached to the logger. A hold down of some kind must keep the logger completely submerged for the duration of the calibration. If a logger floats to the surface then air within the housing can thermally stratify and the two sensors will diverge. That data is not usable for calibration so the run must be done again with that logger.

The reference sensor needs to be as close to the NTC sensor as possible within the housing – preferably with the chip directly over top and facing the NTC thermistor.

Data Point #3: Room Temperature

The loggers stay in the heated bath for a minimum of 4 hours, but preferably 8-12 hours: you want the whole assembly to have enough time to equilibrate. Then they are transferred to an unheated water-filled container (in this case a second Arctic Zone lunch box) where they run at ambient temperatures for another 8-12 hours. This provides the final reference data pair:

Si7051 temperature readings inside a logger at a 30 second sampling interval. The logger was transferred between the two baths at 8am. Both baths are affected by the temperature changes in the external environment.
Detail: Warm temp. NTC ohms (y-axis inverted)
Detail: Room temp. NTC ohms (y-axis inverted)

As the environment around the box changes, losses through the insulation create gentle crests or troughs where the lag difference between the sensors changes sign. So averaging several readings across those inflection points cancels out any lag error between the reference sensor and the NTC. Take care that you average exactly the same set of readings from both the Si7051 and the NTC. At this point you should have three Temperature / Resistance data pairs that can be entered into the SRS online calculator to generate the equation constants ->

I generally use six digits from the reference pairs, which is one more than I’d trust in the temperature output later. I also record the Beta constants for live screen output because that low accuracy calculation takes less time on limited processors like the 328p.

The final step is to use those constants to calculate the temperature from the NTC data with:
Temperature °C = 1/(A+(B*LN(ohms))+(C*(LN(ohms))^3))-273.15
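That spreadsheet formula translates directly to code. A small sketch, using widely published example coefficients for a generic 10k NTC rather than constants from an actual calibration:

```python
import math

def ntc_temp_c(ohms, a, b, c):
    """Shortened Steinhart-Hart: 1/T = A + B*ln(R) + C*ln(R)^3, T in kelvin,
    then converted to Celsius."""
    ln_r = math.log(ohms)
    return 1.0 / (a + b * ln_r + c * ln_r ** 3) - 273.15

# commonly quoted textbook coefficients for a generic 10k NTC (illustrative
# only - always use the constants from your own calibration run)
A, B, C = 0.001129148, 0.000234125, 8.76741e-8
print(ntc_temp_c(10000.0, A, B, C))  # ~25.0 deg C at the nominal resistance
```

Apply the same function to every NTC reading from the calibration run, then overlay the result on the reference sensor curve as described below.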

Then graph the calculated temperatures from the NTC calibration readings over top of the reference sensor temperatures. Provided the loggers were completely immersed in the water bath, flatter areas of the two curves should overlap one another precisely. However, the two plots will diverge when the temperature is changing rapidly because the NTC exhibits more thermal lag than the Si7051. This is because the NTC is located near the thermal mass of the ProMini circuit board.

Si reference & NTC calculated temperatures: If your calibration has gone well, the curves should be nearly identical as shown above, with exceptions only in areas where the temperature was changing rapidly and the two sensors got out of sync because of their different thermal lags.

Also note that the hot and warm bath data points can be collected with separate runs. In fact, you could recapture any individual data pair and recalculate the equation constants with the two older ones any time you suspect a run did not go smoothly. Add the constants to all of the data column headers, and record them in a Google doc with the three reference pairs and the date of the calibration.

Validation

You should always do a final test to validate your calibrations, because even when the data is good it’s easy to make a typo somewhere in the process. Here, a set of nine calibrated NTC loggers are run together for a few days in a gently circulating water bath at ambient temperature –>


Two from this set are a bit high and could be recalibrated, but all of the NTC temperature readings now fall within a 0.1°C band. This is a decent result from a method you can do without laboratory grade equipment, and the sensors could be brought even closer together by using this validation data to normalize the set.

Comments

The method described above uses equipment small enough to be portable, allowing easy classroom setup/takedown. More importantly this also enables the re-calibration of loggers in the field if you have access to a freezer. This makes it possible to re-run the calibrations and then apply compensation techniques to correct for sensor drift. Validating calibration before and after each deployment is particularly important with DIY equipment to address questions about data quality at publication. Glass encapsulated NTC thermistors drift up to 0.02 °C per year near room temperatures, while epoxy coated sensors can drift up to 10x that.

At the ice-point, our resolution is ~0.0025°C but our time-based readings vary by ±0.0075°C. This is due to timing jitter in the ProMini oscillator and in the interrupt handling by a 328p. So with a [104] reservoir capacitor in the timing circuit, our precision at 0°C is 0.015°C.

Having a physical constant in the calibration data is important because most of the affordable reference sensors in the Arduino landscape were designed for applications like healthcare, HVAC, etc. So they are usually designed to minimize error in warmer target ranges, while getting progressively worse as you approach 0°C. But accuracy at those lower temperatures is important for environmental monitoring in temperate climates. The method described in this post could also be used to calibrate commercial temperature sensors if they are waterproof.

Calibrating the onboard thermistor is a good idea even if you plan to add a dedicated temperature sensor, because you always have to do some kind of burn-in testing on a newly built logger – so you might as well do something productive with that time. I generally record as much data as possible during the calibration to fill more memory and flag potentially bad areas in the EEprom. (Note: our code on GitHub allows only 1, 2, 4, 8, or 16 bytes per record to align with page boundaries.) And always look at the battery record during the calibration, as it’s often your first clue that a DIY logger might not be performing as expected. It’s also worth mentioning that if you also save the RTC temperatures as you gather the NTC calibration data, this procedure gives you enough information to calibrate that register as well. The resolution is only 0.25°C, but it does give you a way to check if your ‘good’ temperature sensors are drifting, because the DS3231 tends to be quite stable.

While the timing jitter does not change, the non-linearity of the NTC resistance reduces the resolution at warmer temperatures to 0.005°C. Precision at 35°C also suffers, falling to 0.02°C. Using a 10x larger [105] reservoir cap would get us back to the resolution we had at 0°C, as would oversampling – which actually requires this kind of noise for the method to work. Either of those changes would draw proportionally more power from the coin cell for each reading, so it’s a tradeoff that might not be worth making when you consider sensor lag.

For any sensor calibration the reference points should span the range you hope to collect later in the field. To extend this procedure for colder climates you could replace the ice point with the freezing point of Galinstan (-20°C) although a domestic freezer will struggle to reach that. If you need a high point above 40°C, you can use a stronger heat source. Using two of those 8 watt pads in one hard sided lunch box requires some non-optimal bending at the sides, but it does boost the bath temp to about 50°C. 3D printed PLA hold-downs will start to soften at higher temps so you may need to alter the design to prevent the loggers from popping out during the run.

If your NTC data is so noisy you can’t see where to draw an average, check the stability of your regulator because any noise on the rail will affect the Schmitt trigger thresholds used by our ICU/timer method. This isn’t an issue running from a battery, but even bench supplies can give you noise related grief if you’ve ended up with some kind of ground loop. You could also try oversampling, or a leaky integrator to smooth the data – but be careful to apply those techniques to both the reference and the NTC in exactly the same way because they introduce significant lag. Temperature maximums are underestimated and temperature minimums are overestimated by any factor that introduces lag into the system. In general, you want to do as little processing to raw sensor readings as possible at capture time because code-based techniques usually require some prior knowledge of the data range & variation before they can be used safely. Also note that our digital pin ICU based method for reading resistors does not work well with temperature compensated system oscillators because that compensation circuitry could kick in between the reference resistor and NTC readings.

And finally, the procedure described here is not ‘normalization’, which people sometimes confuse with calibration.  In fact, it’s a good idea to huddle-test your sensors in a circulating water bath after calibration to bring a set closer together even though that may not improve accuracy. Creating post-calibration y=Mx+B correction constants is especially useful for sensors deployed along a transect, or when monitoring systems that are driven by relative deltas rather than by absolute temperatures. Other types of sensors like pressure or humidity have so much variation from the factory that they almost always need to be normalized before deployment – even on commercial loggers.
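A least-squares fit of one sensor's huddle-test readings against the bath reference gives those M and B correction constants. A minimal sketch with made-up readings:

```python
def fit_correction(raw, ref):
    """Least-squares y = M*x + B mapping one sensor's readings onto the
    circulating-bath reference, so a set of sensors can be normalized."""
    n = len(raw)
    mean_x, mean_y = sum(raw) / n, sum(ref) / n
    sxx = sum((x - mean_x) ** 2 for x in raw)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(raw, ref))
    m = sxy / sxx
    return m, mean_y - m * mean_x  # (M, B)

# hypothetical huddle-test data: this sensor reads a touch low and compressed
raw = [19.80, 20.30, 20.80, 21.30]
ref = [20.00, 20.55, 21.10, 21.65]
M, B = fit_correction(raw, ref)
corrected = [M * x + B for x in raw]  # apply y = M*x + B to future readings
```

The correction is then applied to every future reading from that sensor, exactly as you would with the ‘huddle test’ normalization described in the linked post.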

Normalize your set of reference sensors to each other before you start using them to calibrate NTC sensors.


References & Links:

SRS Online Thermistor Constant Calculator
Steinhart & Hart spreadsheet from CAS
S&H Co-efficient calculator from Inside Gadgets
Molex Experimenting with Thermistors Design Challenge
Thermistor Calibration & the Steinhart-Hart Equation WhitePaper from Newport
ITS-90 calibrates with Mercury (-38.83), Water (0.01) & Gallium (29.76) Triple Point cells
Guide on Secondary Thermistor Thermometry, White et al. (2014)
Steinhart-Hart Equation Errors BAPI Application Note Nov 11, 2015
The e360: A DIY Classroom Data Logger for Science
How to make Resistive Sensor Readings with DIGITAL I/O pins
Single Diode Temperature Sensors
Measuring Temperature with two clocks
How to Normalize a Set of Sensors

The e360: A DIY Classroom Data Logger for Science [2023]

2023 is the ten-year anniversary of the Cave Pearl Project, with hundreds of loggers built from various parts in the Arduino ecosystem and deployed for Dr. Beddows’ research. During that time her EARTH 360 – Instrumentation course evolved from using commercial equipment to having students assemble a logging platform for labs on environmental monitoring. The experience of those many first-time builders has been essential to refining our educational logger design to achieve maximum utility from a minimum number of components. So, in recognition of their ongoing and spirited enthusiasm, we call this new model the e360.

A standard 50mL centrifuge tube forms the housing, which is waterproof to about 8 meters depth. For better moisture resistance use tubes with an O-ring integrated into the cap (made of silicone or ethylene propylene) which gets compressed when the threads are tightened.
A bracket for the logger which can be installed horizontally or vertically. Zip ties pass through the central support to wrap around the 50ml centrifuge tube. This prints without any generated supports and the STL can be downloaded for printing from the Github repository for this logger.

Many parallel trends have advanced the open-source hardware movement over the last decade, including progress towards inexpensive and (mostly) reliable 3D printing. In keeping with the project’s ethos of accessibility, we use an Ender 3 for the rails and you can download that printable stl file directly from Tinkercad. Tinkercad is such a beginner-friendly tool that students are asked to create their own logger mounting brackets from scratch as an exercise in the Lux/LDR calibration lab. This directly parallels our increasing use of 3D prints for installation brackets & sensor housings on the research side of the project.

One of the things that distinguishes this project from others in the open science hardware movement is that instead of constantly adding features like IOT connectivity, we have been iterating towards simplicity. Cheap, flexible, stand-alone loggers enable many teaching and research opportunities that expensive, complicated tools cannot. However there are a few trade-offs with this minimalist 2-module design: Supporting only Analog & I2C sensors makes the course more manageable, but losing the DS18b20, which has served us so well over the years, does bring a tear to the eye. Removing the SD card used on previous EDU models means that memory becomes the primary constraint on run-time. The RTC’s one second alarm means this logger is not suitable for higher frequency sampling, and UV exposure makes the 50ml tubes brittle after 3-4 months in full sun. Coin cell chemistry limits operation to environments that don’t go far below freezing – although it’s easy enough to run the logger on two lithium AAAs in series, and we’ve tested those down to -15°C.



Parts for the lab kit:

The basic logger kit costs about $10 depending on where you get the DS3231 RTC & 3.3v/8MHz ProMini modules. Pre-assembly of the UART cable, NTC cluster & LED can be done to shorten lab time. CP2102 6pin UARTs are cheap, and have good driver support, but you have to make that Dupont crossover cable because the pins don’t align with the ProMini headers.
Sensor modules for lab activities: TTP233 touch, BMP280 pressure, BH1750 lux, AM312 PIR, 1k & 10k pots and a sheet of metal foil for the capacitive sensing lab. Other useful additions are a piezo buzzer, a 0.49″ OLED and 32k AT24C256 EEproms. The screen is $4, but the other parts should cost about $1 each.

You can find all the parts shown here on eBay and Amazon – except for the rail, which needs to be printed, but these days it’s relatively easy to send 3D models out to a printing service if someone at your school doesn’t already have a printer. Expect 15% of the parts from cheap suppliers like eBay or Amazon to be high drain, or simply DOA. We order three complete lab kits per student to cover defects, infant mortality, and replacement of parts damaged during the course. This is usually their first time soldering and some things will inevitably get trashed in the learning process – but that’s OK at this price point. We also order each part from three different vendors, in case one of them is selling rejects from a bad production run. The extra parts allow students to build a second or third logger later on in the course, which is often needed for their final project.

I’ve used short jumpers here to make the connections clear, but it’s better to use longer wires from a 20cm F-F Dupont ribbon to make these cables. Only the 3.3v output from the CP2102 gets connected to the logger.
CP2102 UART -> ProMini
DTR->DTR
RXD->TXO
TXD->RXI
GND->GND
3V3->VCC

Macintosh USB-C to USB-A adapters are smart devices with chips that will shut down if you unplug from the computer while a battery-powered logger is still connected. The coin cell back-feeds enough voltage to put the dongle into an error state. Always disconnect the UART-to-logger connection FIRST instead of simply pulling the whole string of still-connected devices out of the computer.

After installing OS drivers for your UART, you need to select the IDE menu options:
[1] TOOLS > Board: Arduino Pro or Pro Mini
[2] TOOLS > Processor: ATmega328 (3.3v, 8mhz)
[3] TOOLS > Port: Match the COM# or /dev that appears when you connect the UART

On this UART the 5v connection had to be cut with a knife before soldering the 3.3v pads together to set the output voltage.

For many years we used FT232s, but the current Windows drivers will block operation if you get one of the many counterfeit chips on the market. If you do end up with one of those fakes, only OLD drivers from 2015/2016 will get that UART working with the IDE. To avoid that whole mess, we now use CP2102s or CH340s. Some UARTs require you to cut or bridge solder pads on the back side to set the 3.3v that an 8MHz ProMini runs on. Many I2C sensor modules on the market also require this lower voltage. Avoid Pro Minis with the much smaller 328P-MU variant processors. They may be compatible inside the chip, but the smaller solder pad separation makes the overall logger noticeably more susceptible to moisture related problems later.


Assembling the logger:

This e360 model is based on the 2-Module logger we released in 2022, with changes to the LED & NTC connections to facilitate various lab activities in the course. That post has many technical details about the logger that have been omitted here for brevity, so it’s a good idea to read through that extensive background material when you have time.

Prepare the RTC module:

Clipping the Vcc supply leg (2nd leg in from the corner) puts the DS3231 into a low-power mode powered by the backup battery, and also disables the 32k output.
Disconnect the indicator LED by removing its limit resistor.
Remove the 200Ω charging resistor, and bridge the Vcc via to the battery power trace at the black end of the diode.

Cutting the VCC input leg forces the clock to run on VBAT, which reduces the DS3231 chip’s constant current to less than 1µA, although that can spike as high as 550µA when the TCXO temperature reading occurs (every 64 seconds). The temp. conversions and the DS3231 battery standby current average out to about 3µA, so the RTC is responsible for most of the power used by this logger over time. If the time reads 2165/165/165 instead of the normal startup default of 2000/01/01 then the registers are bad and the RTC will not function. Bridging Vcc to Vbat means a 3.3V UART will drive some harmless reverse current through older coin cells while connected. DS3231-SN RTCs drift up to 61 seconds/year while -M chips drift up to 153 sec/year. If the RTC’s temperature readings are off by more than the ±3°C spec then the clocks will drift more than that.
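As a side note, a ppm accuracy spec converts directly into worst-case seconds of drift. The quick conversion below is the straight arithmetic, ignoring leap days and the fact that real drift varies with temperature:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def worst_case_drift(ppm):
    """Worst-case seconds of clock drift per year for a given ppm spec -
    one part per million of a year is about 31.5 seconds."""
    return ppm * 1e-6 * SECONDS_PER_YEAR

print(worst_case_drift(2))  # +/-2 ppm (DS3231-SN spec) -> ~63 s/year
print(worst_case_drift(5))  # +/-5 ppm (DS3231-M spec)  -> ~158 s/year
```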

It’s a good idea to do a breadboard test of those RTC modules (with the logger base-code) before assembling your logger.

Modify & Test the Pro Mini:

A Pro Mini style board continues as the heart of the logger, because they are still the cheapest low-power option for projects that don’t require heavy calculations.

Carefully clip the 2-leg side of the regulator with sharp side-snips and wobble it back and forth till it breaks the 3 legs on the other side.
Remove the limit resistor for the power indicator LED with a hot soldering iron tip.
Clip away the reset switch. This logger can only be started with serial commands via a UART connection.
Add 90° UART header pins and vertical pins on D2 to D6. Also add at least one analog input (here shown on A3). Students make fewer soldering errors when there are different headers on the two sides of the board for orientation.
Bend these pins inward at 45° and tin them for wire attachments later. D4 & D5 are used for the capacitive sensor lab.
Do not progress with the build until you have confirmed the ProMini has a working bootloader by loading the blink sketch onto it from the IDE.

Add the NTC/LDR Sensors & LED indicator

These components are optional, but provide opportunities for pulse width modulation and sensor calibration activities.

Join a 10k 3950 NTC thermistor, a 5528 LDR, a 330Ω resistor and a 0.1µF [104] ceramic capacitor. Then heat shrink the common soldered connection.
Thread these through D6=LDR, D7=NTC, D8=330Ω, and the cap connects to ground at the end of the Pro Mini module. Note that the D6/D7 connections could be any resistive sensors up to a maximum value of 65k ohms.
Solder the sensor cluster from the bottom side of the Pro Mini board and clip the tails flush with the board. Clean excess flux with alcohol & a cotton swab.

The way we read resistive sensors using digital pins is described in this post from 2019, although to reduce part count & soldering time in this e360 model we use the 30k internal pullup resistor on D8 as the reference value that the NTC and LDR get compared to. We have another post describing how to calibrate NTC thermistors in a classroom setting. Noise/variation in the NTC temperature readings is ±0.015°C, so the line on a graph of rapid 1-second readings is usually about 0.03°C thick. Range switching with two NTCs could also be done if the max/min resistance values of one thermistor can’t deliver the resolution you need.

Add a 1/8 watt 1kΩ limit resistor to the ground leg of a 5mm common cathode RGB led. Extend the red channel leg with ~5 cm of flexible jumper wire.
Insert Blue=D10, Green = D11, GND = D12. Solder these from the under side of the Pro Mini and clip the excess length flush to the board.
Bring the red channel wire over and solder it through D9. Note that if the RGB is not added, the default red LED on D13 can still be used as an indicator.

You can test which leg of an LED is which color with a CR2032 coin cell, using the negative side of the battery on the ground leg. The LED color channels are soldered to ProMini pins R=D9, B=D10, G=D11, and a 1k limit resistor is added to the D12-GND connection to allow multi-color output via the PWM commands that those pins support.


Join the Two Modules via the I2C Bus:

Use legs of a scrap resistor to add jumpers to the I2C bus connections on A4 (SDA) and A5 (SCL). Trim any long tails from these wires left poking out from the top side of the ProMini.
Cover the wires with small diameter heat-shrink and bend them so they cross over each other. The most common build error is forgetting to cross these wires.
Use another scrap resistor to extend the Vcc and GND lines vertically from the tails of the UART headers. This is the most challenging solder joint of the whole build.
Add a strip of double-sided foam tape across the chips on the RTC module and remove the protective backing.
Carefully thread the I2C jumpers though the RTC module.
Press the two modules together and check that the two boards are aligned.
Check that the two I2C jumpers are not accidentally contacting the header pins below, then solder all four wires into place on the RTC module.
Bend the GND wire to the outer edge of the module, and trim the excess from the SDA and SCL jumpers. Adding a capacitor to the power wires helps the coin cell handle brief loads:
Optional: Solder a 470µF [477A] to 1000µF [108J] tantalum capacitor to the VCC and GND wires. Clip away the excess wire.
Tin the four I2C headers on the RTC module and the SQW alarm output pin.
Join the RTC’s SQW output to the header pin on D2 with a short length of flexible jumper wire. At this point the logger core is complete and could operate as a stand-alone unit.
Bend the four I2C header pins up to 45 degrees.

As soon as you have the two modules together: connect the logger to a UART and run an I2C bus scanning program to make sure you have joined them properly. This should report the DS3231 at address 0x68, and the 4K EEprom at 0x57.


Add Rails & Breadboard Jumpers:

Clip the bottom posts away from two 25 tie-point mini breadboards.
Insert the breadboards in the rails. Depending on the tolerance of your 3D print, this may require more force and/or a deburring tool to make the hole larger.
Mount the breadboards flush with the upper surface of the rails. If they are too loose in your print they can be secured quickly with a drop of hot glue, or cyanoacrylate super-glue sprinkled with a little baking soda to act as an accelerant.
The 3D printed rails have a pocket cutout for the logger stack. The RTC module board should sit flush with the upper surface of the rail. Hot glue can be applied from the underside through the holes near the corners to hold the logger to the rails…
or thin zip ties or twisted wire can hold the logger stack in place. The legs of a scrap resistor can be used if the holes on your RTC module are too small for zips. (see 1:06 in the build video)
Check that the RTC pcb is flush in the pocket at opposite diagonal corners.
Cut two 14 cm lengths of 22AWG solid core wire. Insert stripped ends into the breadboards as shown, then route through the holes in the rail.
Secure the wires from the underside with a zip tie. Note: the ‘extra’ holes in the rail are used to secure small desiccant packs during deployment.
Route the solid core wires along the side of the breadboard and back out through the two inner holes near the logger stack.
The green wire should exit on the analog side of the Pro Mini and the blue wire should be on the digital side.
Route and trim the green wire to length for the A3 header.
Strip, tin and solder the wire to the A3 pin.
Repeat the process for the blue wire, connecting it to D3.
Extend the four I2C headers on the RTC module with 3 cm solid core jumpers. Here, white is SDA (data) and yellow is SCL (clock).
Bend the jumpers into the breadboard contacts. BMP280 and BH1750 sensor modules usually require this crossover configuration.

A video covering the whole assembly process:

NOTE: For people with previous building experience we’ve also posted a 4 minute Rapid Review.

Code Overview: [posted on GitHub]

The base code requires the RocketScream LowPower.h library to put the logger to sleep between readings; this can be installed via the library manager in the IDE. In addition to the included NTC / LDR combination, the code has support for the BMP/E280, BH1750 (lux), and PIR sensors, although you will need to install libraries (via the IDE's library manager) for some of them. Sensors are added by uncommenting define statements at the beginning of the code. Each sensor enabled after the single-byte LowBat & RTCtemp defaults contributes two additional bytes per sampling event because every sensor's output gets loaded into a 16-bit integer variable.

The basic sensors cover light, temperature, pressure and humidity – so you could teach an introductory enviro-sci course by enabling or disabling those sensors before each lab. Note: while the BME280 is quite good for indoor measurements where very high RH% rarely occurs, SHT30 or AM2315C sensors encapsulated in water-resistant PTFE shells are better choices for long-term weather stations.

BMP280 outputs can be saved individually, but the total bytes per sampling record must be 1, 2, 4, 8 or 16 ONLY. You may need to add or remove RTC temp or Current Battery to make the byte total correct for a new sensor.

But limiting this tool to only the pre-configured sensors would completely miss the point of an open source data logger project. So we’ve tried to make the process of modifying the base-code to support different sensors as straightforward as possible. Edits are required only in the places indicated by call-out numbers on the following flow diagrams. These sections are highlighted with comments labeled: STEP1, STEP2, STEP3, etc. so you can locate them with the find function in the IDE.

Those comments are also surrounded by rows of +++PLUS+++ symbols:
//++++++++++++++++++++++++++++++++++++++++++
// STEP1 : #include libraries & Declare Variables HERE
//++++++++++++++++++++++++++++++++++++++++++

In Setup()

2024 note: Additional start-menu options have been added since this graphic was created in 2023, and there are a few additional debugging options that are not displayed unless serial output is enabled.

A UART connection is required to access the start-up menu through the serial monitor window in the IDE. This menu times-out after 8 minutes but the sequence can be re-entered at any time by closing and re-opening the serial monitor. This restarts the Pro Mini via a pulse sent from the UART's DTR (data terminal ready) pin. The start-up menu should look similar to the screen shot below, although the options may change as new code updates get released:

If you see random characters in the serial window, you have the baud rate set incorrectly. Set the baud to 500,000 (with the pulldown menu on the lower right side of the serial monitor window) and the menu should display properly after you close & re-open the window. If you Ctrl-A & Ctrl-C to copy data from the serial monitor while the window still has garbled characters displayed, only the bad starting characters will copy out. On a new logger: Hardware, Calibration & Deployment fields will display as rows of question marks until you enter some text via each menu option.

The first menu option asks if you want to download data from the logger, after which you can copy/paste everything from the serial window into a spreadsheet. Then, under the Data tab in Excel, select Text to Columns to divide the data into separate columns at the comma separators. Or you can paste into a text editor and save a .csv file for import to other programs. While this transfer is a bit clunky, everyone already has the required cable and retrieval is driven by the logger itself. We still use the legacy 1.8.x version of the IDE, but you could also do this download with a generic serial terminal app. You can download the data without battery power once the logger is connected to a UART. However, you should only set the RTC after installing a battery, or the time will reset to 2000/01/01 00:00 when the UART is disconnected. No information is lost from the EEprom when you remove and replace a dead coin cell.

A Unix timestamp for each sensor reading is reconstructed during data retrieval by adding successive second-offsets to the first record time saved during startup. It is important that you download old data from a previous run before changing the sampling interval because the interval stored in memory is used for the calculation that reconstructs each record's timestamp. This technique saves a significant amount of our limited memory, and =(Unixtime/86400) + DATE(1970,1,1) converts those Unix timestamps into Excel’s date-time format. Valid sampling intervals must divide evenly into 60 and be less than 60. Short second-intervals are supported for rapid testing & debugging, but you must first enter 0 for the minutes before the seconds entry is requested. The unit will keep using the previous sampling interval until a new one is set. It helps to have a utility like Eleven Clock running so that you have HH:MM:SS displayed on your computer screen when setting the logger's clock.
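The arithmetic behind that reconstruction is simple enough to sketch in a few lines. This is not the logger's actual retrieval code – the function names are illustrative – but it shows the offset-addition and the Excel conversion described above:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Only the Unix time of the FIRST record is stored; every later record's
// timestamp is recovered by adding multiples of the sampling interval.
std::vector<uint32_t> reconstructTimestamps(uint32_t firstRecordUnixTime,
                                            uint32_t intervalSeconds,
                                            uint32_t recordCount) {
    std::vector<uint32_t> times;
    for (uint32_t i = 0; i < recordCount; i++) {
        times.push_back(firstRecordUnixTime + i * intervalSeconds);
    }
    return times;
}

// Equivalent of the spreadsheet formula =(Unixtime/86400)+DATE(1970,1,1):
// DATE(1970,1,1) is day number 25569 in Excel's serial date system.
double unixToExcelSerial(uint32_t unixTime) {
    return (double)unixTime / 86400.0 + 25569.0;
}
```

Note that this is also why downloading before changing the interval matters: the reconstruction uses whatever interval is currently stored, so old records get stamped with the wrong times if the interval has changed since they were saved.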

The easiest way to measure the supplied voltage while the logger is connected to USB/UART power is at the metal springs in the Dupont cable.

Vref compensates for variations in the reference voltage inside the 328p processor. Adjusting that constant up or down by 400 raises/lowers the reported voltage by 1 millivolt. Adjust this by checking the voltage supplied by your UART with a multimeter while running the logger with #define logCurrentBattery enabled and serial output Toggled ON at a 1 second interval. Note the difference between the millivolts you actually measured and the battery voltage reported on the serial monitor, and then multiply that by 400 to get the adjustment you need to make to the 1126400 value for vref. Restart and save this new number with the [ ] Change Vref menu option and repeat this procedure until the battery reading on screen matches what you are measuring with the DVM. This adjustment only needs to be done once as the number you enter is stored in the 328p EEprom for future use. Note that most loggers run fine with the default 1126400 vref, although some units will shutdown early because they are under-reading. It’s rare to get two constants the same in a classroom of loggers, so you can use student initials + vref as unique identifiers for each logger. If you do get a couple the same you can change the last two digits to make unique serial numbers without affecting the readings. The battery readings have an internal resolution limit of 16 millivolts, so ±20mv is as close as you can get on screen.
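The correction arithmetic is easy to get backwards, so here it is as a one-line sketch (the function name and variables are illustrative, not from the logger code):

```cpp
#include <cassert>
#include <cstdint>

// If the multimeter reads HIGHER than the logger reports, the constant
// must go UP: ~400 counts of the vref constant shift the reported
// battery voltage by ~1 mV, per the procedure described above.
int32_t adjustedVref(int32_t currentVref,
                     int32_t measured_mV,   // from your multimeter/DVM
                     int32_t reported_mV) { // from the serial monitor
    return currentVref + (measured_mV - reported_mV) * 400L;
}
```

So a logger under-reading by 4 mV against the default 1126400 would get a new constant of 1128000.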

After setting the time, the sampling interval, and other operating parameters, choosing [ ] START logging will require the user to enter an additional ‘start’ command. Only when that second ‘start’ confirmation is received does old data get erased by pre-loading every memory location in the EEprom with zero. A zero-trap is required on the first byte of each record because those preloaded zeros also serve as the End-Of-File markers later during download. (Note: If you leave the default LogLowestBattery enabled that is already done for you) LEDs then ‘flicker’ rapidly to indicate a synchronization delay while the logger waits to reach the first aligned sampling time so the code can progress from Setup() into the Main Loop().

In the main LOOP()

If all you do is enable sensors via defines at the start of the program you won’t have to deal with the code that stores the data. However to add a new sensor you will need to make changes to the I2C transaction that transfers those sensor readings into the EEprom (and to the sendData2Serial function that reads them back later). This involves dividing your sensor variables into 8-bit pieces and adding those bytes to the wire transfer buffer. This can be done with bit-math operations for long integers or via the lowByte & highByte macros for 16-bit integers. The general pattern when sending bytes to an I2C EEprom is:

Wire.beginTransmission(EEpromAddressonI2Cbus); // first byte in I2C buffer
Wire.write(highByte(memoryAddress)); // it takes two bytes to specify the
Wire.write(lowByte(memoryAddress)); // memory location inside the EEprom

loByte = lowByte(SensorReadingIntegerVariable);
Wire.write(loByte); // adds 1st byte of sensor data to wire buffer
hiByte = highByte(SensorReadingIntegerVariable);
Wire.write(hiByte); // adds 2nd byte of sensor data to the buffer

— add more Wire.write statements here as needed for your sensors —

The saved bytes must total 1, 2, 4, 8 or 16 in each I2C transaction. Power-of-two byte increments are required because the number of bytes saved per sampling event must divide evenly into the physical page size inside each EEprom, which is also a power of two. The code will display a warning on screen if bytesPerRecord is not a power of two.

Wire.endTransmission(); // Only when this command executes do the bytes accumulated in the wire buffer actually get sent to the EEprom.

The key insight here is that the wire library is only loading the bytes into a memory buffer until it reaches the Wire.endTransmission() command. So it does not matter how much time you spend adding (sensor variable) bytes to the transaction so long as you don’t start another I2C transaction while this one is in progress. Once that buffered data has been physically sent over the wires, the EEprom enters a self-timed writing sequence and the logger reads the rail voltage immediately after the write process begins. The only way to accurately gauge the state of a lithium battery is to check it while it is under this load.

NOTE: The data download function called in setup retrieves those separate bytes from the EEprom and concatenates them back into the original integer sensor readings for output on the serial monitor. So the sequence of operations in the sendData2Serial retrieval function must exactly match the order used in the main loop to load sensor bytes into the EEprom.
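The split-and-rejoin symmetry is the part that usually trips people up, so here is the round trip in plain C++ (stand-ins for the Arduino lowByte/highByte macros; names are illustrative):

```cpp
#include <cassert>
#include <cstdint>

// What the main loop does before Wire.write():
uint8_t loByteOf(uint16_t v) { return (uint8_t)(v & 0xFF); }  // lowByte()
uint8_t hiByteOf(uint16_t v) { return (uint8_t)(v >> 8); }    // highByte()

// What sendData2Serial must do on retrieval - in EXACTLY the same
// byte order that the main loop used when saving:
uint16_t rejoin(uint8_t hi, uint8_t lo) {
    return ((uint16_t)hi << 8) | lo;
}
```

If the save order and the retrieval order ever disagree, every reading comes back scrambled, which is why the two functions have to be edited as a matched pair.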


Adding Sensors to the Logger:

By default, the logger records the RTC temperature (#define logRTC_Temperature) at 0.25°C resolution and the battery voltage under load (#define logLowestBattery). These readings are compressed to only one byte each by scaling after subtracting a fixed ‘offset’ value. This allows about 2000 readings to be stored in the 4k (4096-byte) EEprom – roughly 20 days of operation at a 15-minute sampling interval.
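The offset-and-scale trick can be sketched in a couple of lines. The 2800 mV offset and 16 mV step used here are assumptions for illustration (they match the battery resolution mentioned earlier, but are not necessarily the logger's actual constants):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical one-byte compression of a battery reading in millivolts:
// subtract a fixed offset, then scale down so the value fits in 0-255.
uint8_t  encodeBattery(uint16_t mv) { return (uint8_t)((mv - 2800) / 16); }
uint16_t decodeBattery(uint8_t b)   { return 2800 + (uint16_t)b * 16; }
```

The cost of the compression is resolution: a 3000 mV reading comes back as 2992 mV after the round trip, which is fine for tracking a coin cell's burn-down curve.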

A typical RTC temperature record from a logger installed into a cave early in the project. The datasheet spec is ±3° accuracy, but most are within ±0.5° near 25°C. If you do a simple y=Mx+B calibration against a trusted reference sensor, the RTC temperatures are very stable over time. The RTC updates its temperature register every 64 seconds so there is no benefit from reading it more frequently than once per minute.
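Per the DS3231 datasheet, the temperature lives in registers 0x11 (signed whole degrees) and 0x12 (upper two bits hold the 0.25°C fraction). Decoding it once you have read those two bytes over I2C looks like this:

```cpp
#include <cassert>
#include <cstdint>

// Convert the two DS3231 temperature registers into degrees C at the
// 0.25 degree resolution used for the default logRTC_Temperature record.
float rtcTempC(uint8_t msb, uint8_t lsb) {
    int8_t whole = (int8_t)msb;           // register 0x11: two's-complement degrees
    return whole + ((lsb >> 6) * 0.25f);  // register 0x12 bits 7:6 -> 0/.25/.5/.75
}
```

This also works below zero because the ten temperature bits form one two's-complement number, e.g. MSB 0xFF with fraction bits 11 decodes to -0.25°C.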

That 4k fills more quickly if your sensors generate multiple 2-byte integers but larger 32k (AT24c256) EEproms can easily be added for longer running time. These can be found on eBay for ~$1 each and they work with the same code after you adjust the define statements for EEpromI2Caddr & EEbytesOfStorage at the start of the program.

This Bmp280 pressure sensor matches the connection pattern on this 32k EEprom module. So the two boards can be soldered onto the same set of double-length header pins.
Vertical stacking allows several I2C modules to fit inside the 50mL body tube. Any I2C sensor breakouts could be combined this way provided they have different bus addresses.

The pullup resistors on the sensor modules can usually be left in place as the logger will operate fine with a combined parallel resistance as low as 2.2k ohms. No matter what sensor you enable, always check that the total of all bytes stored per pass through the main loop is 1, 2, 4, 8 or 16, or you will get a repeating data error when the bytes transmitted over the I2C bus cross a physical page boundary inside the EEprom. This leads to a wrap-around which over-writes data at the beginning of the memory block. Also note that with larger EEproms you may need to slow the serial communications down to 250k baud to prevent the occasional character glitch that you sometimes see with long downloads at 500k.
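That byte-total rule reduces to one line of bit math. A small sketch of the kind of sanity check described earlier (the function name is illustrative):

```cpp
#include <cassert>
#include <cstdint>

// Record sizes of 1, 2, 4, 8 or 16 bytes always pack evenly into the
// EEprom's power-of-two page size, so no record can straddle a page
// boundary and trigger the wrap-around error described above.
bool isValidRecordSize(uint8_t bytesPerRecord) {
    return bytesPerRecord > 0 && bytesPerRecord <= 16 &&
           (bytesPerRecord & (bytesPerRecord - 1)) == 0;  // power of two?
}
```

So a 6-byte record fails the check, while padding it to 8 bytes with the RTC temp and battery readings passes.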

Perhaps the most important thing to keep in mind is that breadboards connect to the module header pins via tiny little springs which are easily jiggled loose if you bump the logger. Small beads of hot glue can be used to lock sensor modules & wires into place on the breadboard area, and another drop can help secure the CR2032 battery in place for outdoor deployments. Some sensors can handle a momentary disconnection, but most I2C sensors require full re-initialization or they will not deliver any more data after a hard knock jiggles the battery contact spring. So handle the logger gently while it’s running – no tossing them in a backpack full of books! Many students make additional no-breadboard loggers with fully soldered connections to the sensor modules if their final projects require rough handling. It’s also a good idea to put 1 gram (or 2 half-gram) silica gel desiccant packs with color indicator beads inside the body tube for outdoor deployments. A change in the indicator bead color is the only way to know if moisture is somehow seeping in, potentially causing runtime problems or early shutdown.

The base code also includes the DIGITAL method we developed to read the NTC/LDR sensors. On this new build we used the internal pullup resistor on D8 as a reference to free up another digital pin. The blue jumper wire on D3 (the 2nd external interrupt) can wake the logger with high / low signals. This enables event timing and animal tracking. Pet behavior is a popular theme for final student projects.

The TTP233 can detect a press through 1-2mm of flat plastic but it does not read well through the curved surface of the tube. In open air it triggers when your finger is still 1cm away but the sensitivity can be reduced by adding a trimming capacitor.
The AM312 draws <15µA and has a detection range of ~5m through the centrifuge tube. This sensor has a relatively long 2-4 second reset time and will stay high continuously if it gets re-triggered in that time. Our codebase supports counting PIR detections OR using the PIR to wake the logger for other sensor readings instead of the standard RTC alarm.

These 0.49″ micro OLEDs sleep at 6µA and usually draw less than a milliamp displaying text at 50% contrast. However, like all OLEDs they send wicked charge-pump spikes onto the supply rails. A 220 or 440µF tantalum right next to them on the breadboard will suppress that noise. Sleep the ProMini while the pixels are turned on to lower the total current load on the battery.

These displays run about two weeks on a coin cell if you only turn them on briefly at 15 minute intervals, depending on contrast, pixel coverage, and display time. It might also be possible to completely depower them when not in use with a mosfet like the TN0702N3.
These OLEDs are driven by an SSD1306, so you can use standard libraries like Greiman's SSD1306Ascii which can be installed via the library manager. However, the mini screens only display a weirdly located sub-sample of the controller's 1k memory – so you have to offset the X/Y origin points on your print statements accordingly.

While I2C sensors are fun, we should also mention the classics. It is often more memorable for students to see or hear a sensor's output, and the serial plotter is especially useful for lessons about how noisy their laptop power supply is…

If you twist the legs 90°, a standard potentiometer fits perfectly into the 25 tie-point breadboard for ADC control of PWM rainbows on the RGB LED.
Light-theremin tones map onto the squawky little Piezo speaker and alligator clips make it easy to try a variety of metal objects in the Capacitive Sensing lab.

If you run a lab tethered to the UART for power, then your only limitation is the 30-50 milliamps that those chips normally provide. This is usually enough for environmental sensors, although some GPS modules will exceed that capacity. If a high-drain GPS module or an infrared CO2 sensor is required, then you will need one of the previous AA-powered loggers from the project.

When running the logger in stand alone mode your sensors have to operate within the current limitations of the CR2032 coin cell. This means sensors should take readings below 2mA and support low-power sleep modes below 20µA (ideally < 2µA). Order 3.3v sensor modules without any regulators – otherwise the 662k LDO on most eBay sensor modules will increase logger sleep current by ~8µA due to back-feed leakage through the reg. Sensors without regulators usually have -3.3v specified in the name, so a GY-BME280-3.3v humidity sensor has no regulator, but most other BME280 modules will have regulators.

The best sensor libraries support three things: 1) one-shot readings that put the sensor into a low power sleep mode immediately after the reading, 2) the ability to sleep the ProMini processor WHILE the sensor is generating those readings, and 3) integer mathematics for speed and a lower memory footprint. Many sensors can read at different resolutions using a technique called oversampling, but creating high resolution (or low noise) readings with this method takes exponentially more power. So you want your library to let you set the sensor registers to capture only the resolution you need for your application. The library should also have some way to set the I2C address to match your particular sensor module, as most sensors support different addresses depending on which pin is pulled up. Always have a generic I2C bus scanning utility handy to check that the sensor is showing up on the bus at the expected address after you plug it into the breadboard (and restart the logger).


Logger Operation:

The logger usually draws peak currents near 3.3mA although this can be increased by sensors and OLEDs. The logger typically sleeps between 5 – 10µA with a sensor module attached. Four 5mA*30millisecond (CPU time) sensor readings per hour gives a maximum battery lifespan of about one year. So the logger is usually more limited by memory than the 100mAh available from a Cr2032. The tantalum rail-buffering capacitor only extends operating life about 20% under normal conditions, but it becomes more important with poor quality coin cells or in colder environments where the battery chemistry slows down:
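The lifespan claim above comes from simple duty-cycle arithmetic: the active bursts are so short that sleep current dominates. A back-of-envelope sketch (the 8 µA sleep figure is an assumption within the 5-10 µA range quoted above):

```cpp
#include <cassert>

// Rough battery life estimate from a duty-cycled load: brief active
// bursts plus a continuous sleep current, against the cell's capacity.
double estimatedDays(double sleep_uA, double active_mA,
                     double active_ms_per_event, double events_per_hour,
                     double battery_mAh) {
    // mAh consumed per hour by the short wake-ups (ms -> hours = /3.6e6)
    double active_mAh_per_hr = active_mA * (active_ms_per_event / 3600000.0)
                               * events_per_hour;
    double sleep_mAh_per_hr  = sleep_uA / 1000.0;  // uA -> mA
    return battery_mAh / (active_mAh_per_hr + sleep_mAh_per_hr) / 24.0;
}
```

Plugging in four 5 mA × 30 ms events per hour, 8 µA sleep and 100 mAh gives on the order of a year – and shows that the sleep term is ~50x larger than the active term, which is why the EEprom usually fills before the battery dies.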

A BMP280 sampling event with NO rail buffering capacitor draws a NEW coin cell voltage down about 50mv during the logging events…
…while the voltage on an OLD coin cell falls by almost 200 millivolts during that same event on the same logger – (again with NO rail buffer cap)
Adding a 1000µF [108j] tantalum rail buffer to that same OLD battery supports the coin cell, so the logging event now drops the voltage less than 20mV.

The code sleeps permanently when the battery reading falls below the value defined for systemShutdownVoltage, which we usually set at 2850mv because many 328p chips trigger their internal brown-out detector at 2.77v. And the $1 I2C EEprom modules you get from eBay often have an operational limit at 2.7v. If you see noisy voltage curves there’s a good chance poor battery contact is adding resistance: secure the coin cell with a drop of hot glue before deployment.

High & Low battery readings (in mv) from an e360 logging Pressure & Temp from a BMP280 @30min intervals to a 32k I2C EEprom (full in 84 days). This unit slept at 3.2µA and had a 1000uF tantalum rail capacitor. This kept battery droop below 50mv during the data save, stabilizing above 3000mv.
Same configuration & new Cr2032 at start, only this logger had NO rail capacitor and slept at only 2µA. Despite burning less power during sleep, the coin cell voltage droop during data saves was ~100mv, so the battery plateau is closer to the low battery shutdown at 2800mv.

When testing sleep current on a typical batch of student builds, some will seem to have anomalously high sleep currents in the 600-700µA range. Often that’s due to the RTC alarm being on (active low), which causes a constant drain through the 4k7 pullup resistor on SQW until the alarm gets turned off by the processor. Tantalum capacitors are somewhat heat sensitive, so beginners can damage them while soldering, and in those cases they may turn into a short. A typical student logger should draw between 3-10µA when it is sleeping between sensor readings, and if they are consistently 10x that, replacing an overheated rail capacitor may bring that down to the expected sleep current. Also check that the ADC is disabled during sleep, as that will draw ~200µA if it is somehow left on. Occasionally you run into a ProMini clone with fake Atmel 328P chips that won’t go below ~100µA sleep no matter what you do. But even with a high 100µA sleep current logger, a new Cr2032 should run the logger for about a month measuring something like temperature, and 10 days with a high power demand sensor like the BME280. This is usually enough run time for data acquisition within your course schedule even if the high drain issue does not get resolved.

Occasionally you get an RTC module with a weak contact spring making the logger quite vulnerable to bumps disconnecting power. A small piece of double sided foam tape can be used to make the battery contact more secure before any outdoor deployments (although this is somewhat annoying to remove afterward). Note that on this unit hot glue was used to affix the logger to the printed rail instead of zip ties.

A few student loggers will still end up with hidden solder bridges that require a full rebuild. This can be emotionally traumatic for students until they realise how much easier the process goes the second time round. Once you’ve made a few, a full logger can usually be assembled in less than 1.5 hours. It’s even faster if you make them in batches so that you have multiple units for testing at the same time. Order a box of 200 cheap coin cell batteries before running a course, because if a student accidentally leaves a logger running the Blink sketch (as opposed to the logger code) it will drain a battery flat in a couple of hours. This happens frequently.

Measuring tiny sleep currents in the µA range can not be done easily with cheap multimeters because they burden the whole circuit with extra resistance. So you need a special tool like the uCurrent GOLD multimeter adapter (~$75) or the Current Ranger from LowPowerLab (~$150 w screen & battery). You can also do the job with an old oscilloscope if you have that skillset. Again, sleep current is more of a diagnostic tool, so you can usually run a course without measuring the sleep currents by simply running the loggers for a week between labs while recording the battery voltage. If that graph plateaus (after a week of running) near 3v and just stays there for a long time – your logger is probably sleeping at the expected 3-5µA.


Running the labs:

The basic two module combination in this logger (without any additional sensors) can log temperature from the RTC. This ±0.25°C resolution record enables many interesting temperature-related experiments, for example:

  • Log the temperature inside your refrigerator for 24 hours to establish a baseline
  • Defrost your freezer and/or clean the coils at the back of the machine
  • Log the temperature again for 24 hours after the change
  • Calculate the electric power saved by comparing the compressor run time (ie: while the temperature is falling) before & after the change

Note that the housing, the air inside it, and the thermal inertia of the module stack, result in a ~5 to 10 minute lag behind temperature changes outside the logger.

The small form factor of the e360 enables other benchtop exercises like this Mason jar experiment. The loggers are sealed in a cold jar taken directly from the freezer with a BMP280 sampling pressure & temp. every 15 seconds. The jars are then placed in front of a fan which brings them to room temp in ~45 minutes. For a comparison dataset, this experiment can also be done in reverse – sealing the loggers in room temperature jars which are then cooled down in a freezer; although that process takes longer.

Through no fault of their own, students usually have no idea what messy real-world data looks like, and many have not used spreadsheets before. So you will need to provide both good and bad example templates for everything, but that’s easy enough if you ran the experiment a dozen times yourself at the debugging stage.

Even then students will find creative ways to generate strange results: by using a cactus for the evapotranspiration experiment or attempting the light sensor calibration in a room that never rises beyond 100 Lux. Deployment protocols (Sensor Placement, etc.) are an important part of any environmental monitoring course, and ‘unusable data’ ( though the logger was working ) is the most common project failure. It is critical that students download and send graphs of the data they’ve captured frequently for feedback before their project is due. Without that deliverable, they will wait until the hour before a major assignment is due before discovering that their first (and sometimes only) data capturing run didn’t work. This data visualization is required for ‘pre-processing’ steps like the synchronization of different time series and for the identification of measurements from periods where the device was somehow compromised. Your grading rubric has to be focused on effort and understanding rather than numerical results, because the learning goals can still be achieved if they realize where things went wrong.

The temperature readings have serious lag issues while the pressure readings do not. A good lesson in thinking critically about the physical aspects of a system before trusting a sensor. With the built-in 4096 byte EEprom, saving all three 2-byte BMP280 outputs (temp, pressure & altitude) plus two more bytes for RTCtemp & battery gives you room for 512 of those 8-byte records. If you sample every fifteen seconds, the logger will run for about two hours before the RTC module's 4k memory is full. The test shown above was done with a ‘naked’ logger, but when the loggers are used inside the centrifuge body tube the enclosed temperature sensors have about 15 minutes of thermal lag behind changes in air temperature outside the tube.

Try to get your students into the habit of doing a ‘fast burn’ check whenever the logger is about to be deployed: Set the logger to a 1-second interval and then run it tethered to the UART for 20-30 seconds (with serial on). Then restart the serial monitor window to download those few records and look at the data. This little test catches 90% of the code errors before deployment.


Important things to know:

Time: You need to start ordering parts at least three months ahead of time. If a part costs $1 or less, then order 5x as many of them as you think you need. Technical labs take a week to write, and another week for debugging. You can expect to spend at least an hour testing components before each lab. The actual amount of prep also depends on the capabilities of your student cohort, and years of remote classes during COVID lowered that bar a lot. Have several spare ‘known good’ loggers (that you built yourself) on hand to loan out so hardware issues don’t prevent students from progressing through the lab sequence while they trouble-shoot their own builds. Using multi-colored breadboards on those loaners makes them easy to identify later. Measuring logger sleep current with a DSO 138 scope or a Current Ranger will spot most hardware related problems early, but students don’t really get enough runtime in a single course to escape the bathtub curve of new part failures.

Yes, some of that student soldering will be pretty grim. But my first kick at the can all those years ago wasn’t much better and they improve rapidly with practice. As long as the intended electrical contact is made without sideways bridges, the logger will still operate.

Money: Navigating your school's purchasing system is probably an exercise in skill, luck and patience at the best of times. Think you can push through dozens of orders for cheap electronic modules from eBay or Amazon? Fuhgeddaboudit! We have covered more than half of the material costs out of pocket since the beginning of this adventure, and you’ll hear that same story from STEM instructors everywhere. If you can convince your school to get a class set of soldering irons, Panavise Jr. 201s, multimeters, and perhaps a 3D printer with some workshop supplies, then you are doing great. Just be ready for the fact that all 3D printers require maintenance, and the reason we still use crappy Ender3V2’s is that there’s no part on them that can’t be replaced for less than $20. We bought nice multi-meters at the beginning of this adventure but they all got broken, or grew legs, long before we got enough course runs with them. We now use cheap DT830’s and design the labs around their burden-voltage limitations. Small tools like 30-20AWG wire-strippers and side-snips should be considered consumables as few of them survive to see a second class. Cheap soldering irons can now be found for ~$5 (which is less than tip replacement on a Hakko!) and no matter which irons you get the students will run the tips dry frequently. The up side of designing a course around the minimum functional tools is that you can just give an entire set to any students who want to continue on their own after the course. That pays dividends later that are worth far more than one year's budget.

An inexpensive BMP280 can be used as the temperature reference for the thermistor calibration lab. At a 1-minute sampling interval the logger will run for about 16 hours before the 4K EEprom is full. The logger should remain in each of the three water baths for 8 hours to stabilize. Stainless steel washers keep the logger submerged.
With insulated lunch-box containers, the 0°C bath develops a nice rind of ice after being left in the freezer overnight. No reference sensor is needed for the ice point because that is a physical constant.

All that probably sounds a bit grim, but the last thing we want is for instructors to bite off more than they can chew. Every stage in a new course project will take 2-3x longer than you initially think! So it’s a good idea to noodle with these loggers for a few months before you are ready to integrate them into your courses. Not because any of it is particularly difficult, but because it will take some time before you realize the many different ways this versatile tool can be used. Never try to teach a technical lab that you haven’t successfully done yourself a few times.

A good general approach to testing any DIY build is to check them on a doubling schedule: start with tethered tests reporting via the serial monitor, then initial stand-alone tests at 1, 2, 4 & 8 hours until you reach a successful overnight run. Follow this with downloads after 1, 2, 4 & 8 days. On the research side of the project, we do fast (seconds) sample-interval runs to full-memory shutdown several times, over a couple of weeks of testing, before loggers are considered ready to deploy in the real world. Even then we deploy 2-3 loggers for each type of measurement to provide further guarantee of capturing the data. In addition to data integrity, smooth battery burn-down curves during these tests are an excellent predictor of logger reliability, but to use that information you need to be running several identical machines at the same time, started with the same fresh batteries, so you can compare the graphs to each other. A summer climate-station project with five to ten units running in your home or back yard is a great way to start and, if you do invest that time, it really is worth it.

Common coding errors like mishandled variables usually generate repeating patterns of errors in the data. Random processor freezing is usually hardware/timing related, and the best way to locate the problematic code is to run with ‘logger progressed to Point A/B/C/etc.’ comments printed to the serial monitor. In stand-alone mode you can turn on different indicator-LED color patterns for different sections of code. Then when the processor locks up, a glance at the LEDs tells you approximately where the problem occurred.


Last Word:

The build lab at the beginning of the course – with everybody still smiling because they have no idea what they are in for. The course is offered to earth & environmental science students. Reviewing Arduino-based publications shows that hydrologists & biologists are by far the largest users of Open Science Hardware in actual research – not engineers! Traditional data sources rarely provide sufficient spatiotemporal resolution to characterize the relationships between environments and the response of organisms.

Why did we spend ten years developing a DIY logger when the market is already heaving with IoT sensors transmitting to AI back-end servers? Because the only ‘learning system’ that matters to a field researcher is the one between your ears. Educational products using pre-written software are usually polished plug-and-play devices, but the last thing you want in higher education is something that black-boxes data acquisition to the point that learners are merely users. While companies boast that students can take readings without hassle and pay attention only to the essential concepts of an experiment, that has never been how things work in the real world. Troubleshooting by process of elimination, combined with modest repair skills, often makes the difference between a fieldwork disaster and a resounding success. So sanitized equipment that generates uncritically trusted numbers isn’t compatible with problem-based learning. Another contrast is the sense of ownership & accomplishment: it becomes clear when you see how many students gave their loggers names and displayed them proudly in their dorm rooms after the course. That’s not something you can buy off a shelf.



References & Links:
Waterproofing your Electronics Project
Successful field measurements when logging stream temperatures
Examples of soil temperature profiles from Iowa Environmental Mesonet
Oregon Embedded Battery Life Calculator & our Cr2032 battery tests
A practical method for calibrating NTC thermistors
Winners of the Molex Experimenting with Thermistors design challenge
Sensor Response Time Calculator (τ 63.2%)
Calibrating a BH1750 Lux Sensor to Measure PAR
How to Normalize a Set of BMP280 Pressure Sensors
How to Test and Calibrate Humidity Sensors (RH sensors are not reliable outdoors)
Setting Accurate Logger time with a GPS & Calibrating your RTC

And: We made a Classroom Starter Kit parts list back in 2016, when we were still teaching with UNOs, and a post of Ideas for your Arduino STEM Curriculum. Those posts are now terribly out of date, but probably still worth a read to give you a sense of things you might want to think about when getting a new course off the ground. Those old lists also predate our adoption of 3D printing, so I will try to post updated versions soon.

The bottom line is that if you are going to own a 3D printer, you should expect to completely tear down the print head and rebuild it once or twice per year. While they are the bottom of the market in terms of noise & speed, every possible repair on an old Ender is cheap and relatively easy with all the how-to videos on YouTube. Those videos are your manual, because Creality broke the cost barrier to mass-market adoption by shipping products that were only 90% finished, with bad documentation, and then simply waiting for the open source hardware community to come up with the needed bug fixes and performance improvements. CoreXY models are more robust than bed-slingers, but our Ender 3V2s have been reliable workhorses provided you stick to PLA filament. If you want a fast turn-key solution then your best option is one of the Bambu Lab printers like the A1, because they are essentially plug & play appliances. But if you enjoy tinkering as a way of learning, that’s the Ender’s bread & butter. A bed-level touch sensor is – by far – the best way to get that critical ‘first layer’ of your print to stick to the build plate, and an E3V2 upgraded with a CRtouch & flashed to MriscoC firmware is an OK beginner’s rig. Our local computer stores always seem to have brand new 3V2s for about $100 as a new-customer deal, but these days I buy cheap ‘for parts’ Ender 5 S1s from eBay for about that price and fix them up, because 120mm/s is about the minimum speed I have the patience for.
Now that multi-color machines are popular, what used to be high-end single-color machines are being discounted heavily, so even brand new E5S1s are selling for about $200 in 2025 if you look around. Having two printers means a jam doesn’t bring production to a halt just before fieldwork, and by the time you get three printers the speed issues disappear and you can dedicate machines to different filaments. Keep in mind that no matter how fast your printer can move the print head, materials like TPU and PETG will still force you to print at much slower rates if you want clean prints with tight tolerances. To reduce noise, I usually put the printer on a grey 16×16 inch cement paver with some foam or squash-ball feet under it.

Printer jams like this are inevitable, but if you can’t fix one, a pre-assembled replacement hot end for an Ender 3V2 is only $10. Functional prints rarely have to be pretty – so you can speed up production with lower quality settings.

Avoid shiny or multi-color silk filaments for functional prints as they are generally more brittle and crack easily. Prints also get more brittle as they absorb humidity from the air. If that happens, cyanoacrylate glue + baking soda can be used for quick field repairs. It’s worth the extra buck or two to get filaments labeled as PLA pro as they usually have better durability in trade for slightly higher printing temperature (as long as the company was not lying about their formulation). I use a food dehydrator I bought for $10 from a local charity shop to dry out my PLA or PETG filaments if they have been open for more than a month. Really hygroscopic filaments (like PVA, TPU or Nylon) have to be dried overnight before every single print. Most machines work fine with the defaults, but you can get great prints out of any printer provided the bed is leveled, the e-steps & flow are calibrated, and the slicer settings are tuned for the filament you are using. If you are using Cura there is a plugin called AutoTowers Generator that makes special test prints. You print a set of those (for temperature, flow rate and then retraction) and set your slicer settings to match the place on the tower where the print looks best. You may have to do this for each brand of filament as they can have quite different properties. I rarely use filaments that cost more than $15/kg roll because functional prints don’t have to be that pretty and you will be tweaking each design at least 10 times before you iterate that new idea to perfection. I stock up on basic black/white/grey on Amazon sale days for only $10/roll. Once you get the hang of it, most designs can use 45° angles and sequential bridging to print without any supports. Conformal coating or a layer of clear nail polish prevents marker ink from bleeding into the print layers when you number your deployment stations.

Our Lux-to-PAR calibration post is a good example of the physical problems that have to be solved before you can collect good data: a printed shield was necessary to keep the temp sensors cool under direct sun, a ball joint was required for leveling, and a ground spike was needed for the mounting. I have started bumping up against the limits of what you can do in Tinkercad with some of our more complex designs, but OnShape is there waiting in the wings when I’m ready.

As a researcher, becoming functional with Tinkercad and 3D printing is a better investment of your time than learning how to design custom PCBs, because it solves a much larger number of physical issues related to unique experiments and deployment conditions. That said, it’s not usually worth your time to design things from scratch if you can buy them off the shelf. So people get into 3D printing because what they need does not exist or is outrageously expensive for what it does. All FDM prints are porous to some extent because air bubbles get trapped between the lines as they are laid down – so they are not truly waterproof like injection-molded parts unless you add extra treatment steps. UV-resistant clear coating sprays are a good idea for outdoor deployment. I don’t have much use for multi-color printing, but when they advance that to supporting multiple materials I will be interested, because being able to print a TPU gasket inside a PETG housing would be quite useful.

If you are just getting started and looking for something to learn the ropes, you can find useful add-ons for your printer on Thingiverse, or you could try organizing your workspace with the Gridfinity system from Zack Freedman. You will also find lots of handy printable tools for your lab bench by searching with meta-engines like Yeggi or STLfinder. A final word of warning: this hobby always starts with one printer… then two… then… don’t tell your partner how much it cost.