Here the UNO supplies 5V to the NEO-6M (which has its own 50mA regulator) and 3.3V to the RTC. The 5V UNO has no trouble reading the 3.3V signals from the NEO & RTC without level shifters. White wires to D11/D12 carry the GPS Rx & Tx via SoftwareSerial. The tantalum caps just stabilize the supply voltages, which tend to be noisy on breadboards.
So far in 2024 we’ve released DIY calibration procedures for light, pressure & NTC temperature sensors that can be done without expensive reference equipment. Those are important techniques, but with long environmental time-series the 800-pound gorilla in the room is actually time itself. Precise timekeeping is something of a back-burner issue in the hobbyist world because so few people are trying to combine data from multi-sensor arrays. And syncing to an NTP server is easy when you are already using wireless comms – provided that’s ‘good enough’ for your application. However, for people who want high accuracy, using NTP is like setting your watch by constantly asking a friend what time it is.
An extension antenna is required or the GPS will never get a signal indoors. With that mounted in a skylight window the first fix took over 30 minutes to acquire. Subsequent fixes took two minutes or less. It will be interesting to see how this $15 combination performs when we service loggers at heavily forested field sites.
Reconstructing a sequence of events in a dynamic earth system is challenging when timestamps are generated by independent, unsynchronized clocks. But our loggers get deployed by the dozen in underwater caves, and even above ground wireless isn’t really a long-term option on a single CR2032. For stand-alone operation we rely on the DS3231 RTC, as they usually drift about 30 seconds per year – but there are always a few outliers in each batch that exceed the datasheet spec of about 5 seconds per month (±2ppm) for the -SN and 13 sec/month (±5ppm) for the -M (MEMS) chips. These outliers are hard to spot with our usual ‘by hand’ time-setting process over the serial monitor. You can get a set of loggers within 80 milliseconds of each other with that method, but that difference is still annoyingly visible when the LEDs pip. That set me hunting for a better solution.
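To put those ppm specs in concrete terms: 1 ppm of oscillator error means one microsecond gained or lost per second of runtime. A quick plain-C++ sketch of that arithmetic (not part of any logger code, just the conversion):

```cpp
#include <cassert>
#include <cmath>

// 1 ppm of oscillator error = 1 microsecond of drift per second of runtime.
double driftSecPerMonth(double ppm) {
    return ppm * 1e-6 * 86400.0 * 30.44;   // 30.44 days = average month
}

double driftSecPerYear(double ppm) {
    return ppm * 1e-6 * 86400.0 * 365.25;
}
```

driftSecPerMonth(2.0) comes out near 5.3 and driftSecPerMonth(5.0) near 13.2, matching the ±2ppm (-SN) and ±5ppm (-M) datasheet figures.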
Paul Stoffregen’s TimeSerial is often suggested, but TimeLib.h didn’t play well with the RTC functions already in our code base. Even after sorting that out, and installing Processing to run SyncArduinoClock, I was still the one initiating the process. So TimeSerial didn’t get me any closer to the perfectly synchronized LEDs I was after.
This NEO-6M module doesn’t have a PPS header, but the indicator LED is driven by the time-pulse to indicate sync status. This provides one pulse per second, synced at the rising edge, lasting 100 ms. Soldering a jumper to the limit resistor lets you bring that signal over to the UNO with a male Dupont header pin.
SergeBre’s SynchroTime seemed like an all-in-one solution. But even after a decade working with the Arduino IDE, I still made every newbie mistake possible trying to compile that C code for Windows. There are simply too many possible editor/compiler/plugin combinations to sift through without a lot of mistaken installations, removals & re-installs. I wasted a couple of days before realizing that code was packaged for the Qt environment, and when I saw the additional cost I finally had enough. In the end, it took me less time to build my own GPS time-sync code than I spent trying to compile SynchroTime. That’s an important lesson in the difference between something that’s technically open source and a usable solution. Of course I can’t write that without wondering how many feel the same way about this project.
DS3231 datasheet, pg 7: In the ideal case there is no better option than leaving the offset at zero. However, many chips in circulation don’t match this spec – especially the -M chips, which can require offsets of -40 or more to match a GPS pulse at room temperature. Most of the M’s run slow, needing adjustments from -20 to -30 at room temp, while most SN’s sit between -10 and 0, with a few SN’s running fast.
Despite those setbacks, I couldn’t give up this quest knowing that HeyPete had achieved a drift of only 26 seconds with a DS3231 offline for 3 years. The key to that spectacular result was tuning the Aging Offset register before the run. Positive values in this register add capacitance to an internal capacitor array, slowing the oscillator frequency, while negative values remove capacitance and increase the main oscillator frequency. The change varies with temperature, but at 25°C one LSB adjusts by approximately 0.1 ppm (SN) or 0.12 ppm (M). The exact sensitivity is also affected by voltage and age, so it can only be determined empirically for a given chip. The datasheets also warn not to run noisy PCB traces under the RTC that might induce capacitive/coupling effects on the crystal, but many modules on the market ignore this. My rule of thumb when servicing loggers in the field is that changing the aging register by ±3 corrects approximately 1 second of clock drift per month, for when I don’t have access to the resources described in this post. Of course that requires good field notes so you can be sure when the logger’s clock was last set.
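That ±3-per-second-per-month rule of thumb follows directly from the per-LSB sensitivities just quoted. A hypothetical helper sketching the math (the nominal 0.1/0.12 ppm-per-LSB figures are assumptions that vary chip to chip):

```cpp
#include <cassert>
#include <cmath>

// Estimate the aging register change needed to cancel an observed drift,
// given the nominal per-LSB sensitivity at 25C. Sign convention: a slow
// clock (losing time) needs a negative aging value to speed it back up.
int agingStepsToCorrect(double driftSecPerMonth, double ppmPerLSB) {
    double ppm = driftSecPerMonth / (86400.0 * 30.44) * 1e6;
    return (int)lround(ppm / ppmPerLSB);
}
```

One second per month works out to roughly 3 steps for a -M (0.12 ppm/LSB) and 4 steps for an -SN (0.1 ppm/LSB) – close to the ±3 rule of thumb.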
In April 2023, Sherman proposed a method to do this using successive approximations via the Arduino system clock. After considering, and then rejecting, NTP & WWVB, he settled on GPS as the best source and then posted a ready-to-run solution on his GitHub repo: https://github.com/gbhug5a/DS3231-Aging-GPS
Before proceeding further, read the PDF to understand how his code works. The key idea is that “In calibrating the optimum Aging setting, you should duplicate as nearly as possible the power supply voltage the RTC will experience in its application” – although I suspect this is less critical for the crystal-based -SN chips than for the MEMS oscillators. Unfortunately battery voltages change significantly with temperature, so matching the rail implies you are also doing this RTC adjustment at temps near your expected deployment range – which may not be possible. The CR2032s that power our loggers spend most of their operating life at 3.05v, and the power indicator LED on raw RTC modules pulls enough current that it drops the UNO’s 3.3v line down to about 3v. Finished loggers draw only 1-2μA while sleeping, so for those I have to add a Schottky 1N5817 inline to drop that supply down to 3.05v during the test.
The rig can be used to test single RTC modules… [this photo shows the GPS PPS connected to D3]
or finished loggers – provided you put the ProMini to sleep, or load it with blink so it ignores the I2C traffic sent from the UNO. So we can do these tests & adjustments on loggers at any time after they have gone into service.
Sherman’s code uses the Input Capture unit, so the PPS signal from the GPS must be connected to D8. I put a male Dupont pin on the end of the PPS tap so the UNO connection can be moved easily, as the other code in this post requires it connected to D3. When testing the RTC inside assembled loggers, I have to use an alligator clip (in green above) for the alarm line, which already has a soldered wire connection – so a female Dupont will not fit over that header pin.
It usually takes 20-30 minutes for the adjustment on a -SN chip to reach a stable value, or settle into a pattern toggling the last bit up and down:
Each cycle of Sherman’s code shown above takes five minutes. The test tends to work better when the RTC time is close to actual GPS time; however, the test changes the RTC time during the cal process, so you will need to re-sync your RTC to the GPS after the Age offset determination is run. In our version, I’ve added RTC temperature and tweaked the formatting so that it’s easier to graph the output from longer tests. But these are trivial changes.
On this (typical) -M RTC, it took an hour before the 5-minute cycles settled to a stable offset. Later runs of this unit with calcSlope() showed slightly better behavior with a value of -17 but this test might have settled there if I’d left it running longer. Averaging values from the second hour might be the best approach and you want stable temperatures so the RTC isn’t doing TCXO corrections during the test.
Unfortunately the DS3231 has no non-volatile memory, which means all registers reset whenever the chip loses power. So I write the optimum offset from this test on the module’s battery holder with a white paint marker during the component triage I do before building new loggers. About 2/3 of the time, running this test on a completed build gives results very similar to what I got from the RTC module by itself before the logger was assembled. However, for about 1/3 of the RTCs, forcing the chip to run in low-power mode from Vbat slows the main oscillator by up to 2ppm – so the only safe approach is to re-run the age register test after assembly. The BBSQW (battery-backed square wave) enable bit (bit 6 of the 0x0E control register) must be set to keep the alarm output working when running the RTC on Vbat.
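For reference, the bit arithmetic for that control byte looks like this (register address and bit positions are from the DS3231 datasheet; the actual I2C write is whatever your RTC library provides):

```cpp
#include <cassert>
#include <cstdint>

// DS3231 control register (0x0E) bits, per the datasheet:
const uint8_t BBSQW = 1 << 6;  // battery-backed square-wave/alarm enable
const uint8_t INTCN = 1 << 2;  // route alarms to the INT/SQW pin
const uint8_t A1IE  = 1 << 0;  // alarm 1 interrupt enable

// Control byte that keeps alarm output working while running from Vbat.
uint8_t vbatAlarmControl(uint8_t current) {
    return current | BBSQW | INTCN | A1IE;
}
```

Starting from an all-zero control register, this yields 0x45: oscillator running, alarms routed to the INT/SQW pin, and still firing on battery power.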
Many have speculated that there are ‘fake’ DS3231 chips in circulation, but with so many showing scratches & scuff marks I suspect the bad chips are actually the result of rough handling during rework/recycling. And with a chip that’s been in production this long, some are bound to be decades old.
SN chips usually settle to a stable offset value within 2-3 cycles, but it can take an hour or more before the -M chips produce stable age offset results. About one in 40 of the RTC modules never settles to a consistent number no matter how long you leave it, and I toss those as defective. Occasionally this is because, even with the Age register pushed all the way to its maximum (±127), the RTC still cannot match the GPS pulse. Some of the non-calibratable units have a non-functional register – you can write a value to the register and read it back, but it has no effect on the output. I suspect that many of these failures are due to impact damage after the chip has been dropped. I also reject any RTC whose temperature register is off by more than 3°C, because they won’t be able to do proper TCXO corrections. The aging register and the temperature adjustments get combined at the load capacitor bank to tweak the main oscillator, so aging register changes won’t get applied until the next (64 sec) temperature correction unless you also trigger a manual conversion. Just by chance, about one in 25 of the -SN chips keeps almost perfect time compared to the GPS with the register left at the zero default. For now I’m keeping those aside as secondary reference units.
Units per Bin VS Aging offset to match GPS pulse: 140(M) & 85(SN) RTC modules. These were selected at random from multiple eBay vendors, and tested as unmodified modules powered through Vcc at 3.05v. Six of the SN’s had zero offset and I divided those equally into the ±5 bins. Interestingly, while the SN’s are much better behaved as a group, that chip type also had the most extreme outliers with about ten either DOA/unstable or requiring extreme adjustment. I suspect crystal damage explains this observation as there was only one DOA in the batch of M chips.
To leave room for a decent range of TCXO correction, and given the ±2ppm of short-term wander on the MEMS chips, the aging register should only be used to compensate for about 6-7 ppm of baseline offset. I try not to use a module where the aging register correction to match the GPS is more than ±50.
Step 2: Synchronize RTC time with a Neo6M GPS
Most clock projects use NTP, but a few go the extra mile and synchronize to GPS. One that caught my attention was: Super-Accurate GPS-Corrected RTC Clock – without Internet NTP. He avoided the serial bus latency of those pokey 9600 baud comms by preloading variables with GPS time + 1 second and then waiting for the next GPS pulse before setting the RTC registers. With this concept in hand, and TinyGPS++ to parse the NMEA strings, it didn’t take long to whip up my own version for our loggers. It’s worth noting that several forums mentioned NMEA messages can exceed the 64-byte buffer in SoftwareSerial, so I increased this to 128 bytes by editing the file at: C:\Program Files (x86)\Arduino\hardware\arduino\avr\libraries\SoftwareSerial\src
Another hidden gotcha is that GPS time can be out by 2 or 3 seconds until the receiver gets a ‘leap seconds’ update, which is sent with the Almanac every 12.5 minutes. So wait until the sync LED has been blinking for 15 minutes before setting your clock, as I don’t (yet) have an error catch for this. Our RTC time-sync code displays how much adjustment was made to the RTC time, and checks the latency between the GPS pulse and the new RTC time immediately after the sync. That difference is often less than 30 microseconds, but it increases from there if you leave the system running:
Note: If you just ran the aging register test in Step 1 you will need to move the PPS signal jumper from D8 to D3 before running RTC2_SyncDS3231Time2GPS. The RTC alarm output stays on D2. Occasionally the process spans a second transition, so if you see the RTC seconds at anything other than GPSsec+1, de-power the RTC and run it again. The RTC’s internal countdown only restarts if the seconds register gets changed, so multiple runs will not reduce the lag once the time has been synced. For some reason I haven’t identified yet, the 1Hz output from -M chips often ends about 1 millisecond before the GPS pulse, producing a negative lag value after sync (because the first ‘partial’ interval is too short?)
You still have to edit the code by hand for your specific local-time adjustment but everything is well commented. Most scientists run their loggers on UTC which matches GPS default time so that local time tweak can be commented out.
The external antenna connection was pretty flaky until I secured it with hot glue.
One small issue with running these test utilities with the RTC at 3.05v is that you’ll need to change the battery before deploying the logger. To preserve the clock time, connect the logger to a UART so the RTC is powered continuously during any battery swap. After the time-sync & new battery, the normal procedure is to load the logger with its deployment code, which has a start-menu option to set the RTC’s aging offset. This gets saved into the 1K EEPROM on the 328p processor, and once set, the base code automatically reloads that value from the EEPROM into the RTC’s aging register at each runtime startup. After that the logger is ready to deploy – so Step 3 below is only for those who want to explore the DS3231’s drift behavior in more detail.
Step 3: Testing and Verifying Clock Drift
Now that we have the aging offset, and the RTC is synced to GPS time, how do we verify what we’ve done?
HeyPete ran multi-unit drift tests on the same breadboard with all of the RTCs responding to a single I2C master. I’m tempted to try this approach to drift testing of the other sensors like the BMP280, or the BH1750 although I might need to add a TCA9548 I2C multiplexer.
One method is simply to run the clocks until the drift can be easily measured – but that can take several months. You can get immediate results by enabling the 32kHz output on a DS3231-SN and comparing that to a high accuracy source with an oscilloscope. Ideally, you calibrate to a traceable standard which is at least one decimal place better than your device resolution. Kerry Wong did this with an HP 5350B Microwave Counter and HeyPete uses a Trimble ThunderBolt Timing GPS. There are a few retired engineers out there with universal counters on the bench and for truly dedicated ‘time-nuts‘ only an atomic clock will do. But even then the times from several must be averaged to arrive at a published value, and whenever you achieve better numbers by averaging multiple measurements you obscure potential issues with jitter.
Even if we had that equipment budget, our loggers supply the DS3231 from Vbat to save runtime power, which disables the 32kHz output. And -M chips don’t support that temperature-compensated output no matter how they are powered. So is there any validation test that can be done without expensive kit or the 32kHz line?
Actually there is – thanks to the Needle Nose Pliers blog in Tokyo. He developed a method that uses least squares over one minute of aggregated readings to resolve rates of change below 0.02μs/second, despite the fact that the UNO sysclock only ticks every 4μs. I wrapped his calcSlope() function with the modifications needed for the UNO/NEO-6M test rig used here, and added an input to change the Aging register before each run. To run the drift-checking code from our GitHub, connect the GPS PPS to D3, and the RTC SQW to D2:
Note: drift in ms will increase over time, and the ppm values typically vary by ±0.02 (or more for -M chips). 9.999 indicates that the code is still collecting the initial 60 readings required for the slope calculation. It usually takes another minute after that for the ppm readings to settle. The 1-second cycle count lets you know the test duration if you leave a long test running in the background. In real-world terms, ±2ppm is about 175 msec of drift per day, and you can see that happening in real time with this output. In the test shown above I was deliberately using an offset far from the one suggested by the Step 1 test to see how much that increased the drift rate.
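The least-squares idea behind calcSlope() can be sketched generically (this is not NNP’s code, just the standard ordinary-least-squares slope applied to the aggregated readings):

```cpp
#include <cassert>
#include <cmath>

// Ordinary least-squares slope over n (x,y) pairs. Feeding in elapsed RTC
// seconds as x and the measured GPS-vs-RTC offset in microseconds as y
// makes the returned slope the drift rate in us/sec - which is ppm directly.
double lsSlope(const double* x, const double* y, int n) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx  += x[i];
        sy  += y[i];
        sxx += x[i] * x[i];
        sxy += x[i] * y[i];
    }
    return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}
```

Fitting over 60 points is what lets a 4μs system tick resolve sub-0.02μs/s rates: the quantization error of individual readings largely cancels out of the fit.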
That serial output can then be copied into a spreadsheet to compare the effect different aging offsets have on the RTC. Here are the results from two five-hour tests of a DS3231-M; first with the Age register set at zero and then with it set to -17. The clock was sync’d to GPS time before each test to make the graphs easier to compare, with the x-axis in seconds: (click to enlarge)
RTC temp during test → msec offset from GPS → drift error (ppm)
RTC temp during this test: 22°C
Drift: 35msec slow after 5 hours
Average error: +2ppm (with ±2ppm jitter)
At the Age=0 default, this RTC’s 1Hz output was 35 milliseconds behind the GPS pulse after five hours, which works out to roughly 60 seconds of drift per year. The average error hovered around +2 ppm. This is well within spec for a -M chip, as ±5ppm implies up to 155 seconds of drift per year.
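Converting an observed offset into a ppm rate is the same microseconds-per-second arithmetic in reverse; a quick plain-C++ sketch (hypothetical helper name):

```cpp
#include <cassert>
#include <cmath>

// ppm error implied by an offset (in ms) accumulated over a test (in hours):
// convert both to seconds, take the ratio, and scale to parts-per-million.
double ppmFromDrift(double driftMs, double testHours) {
    return (driftMs / 1000.0) / (testHours * 3600.0) * 1e6;
}
```

35 ms accumulated over 5 hours works out to about 1.9 ppm, consistent with the slope-derived average error.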
Then the Aging register was set to -17 (as determined by the test from Step1) and the drift examination was done again. That same RTC module was now only 0.5 milliseconds behind the GPS pulse after five hours, with the slope-derived error averaging close to zero ppm:
Higher 23°C temps on 2nd test
Less than 1ms of drift in 5h
Average error: close to 0ppm (but jitter is the same ±2ppm)
So with the correct aging offset this -M chip could be expected to drift less than a second per year. Of course this only applies near our Step 1 testing temperature, but in general: if you have found the best aging offset correction, the msec difference between a 1Hz alarm from the RTC and the GPS pulse should change very little over a short test.
It’s worth noting there is ±2ppm of jitter in the calculation with that -M chip (above) that is not present with -SN chips. The -SN shown below had a straight linear drift of 20 milliseconds slow over five hours with its Aging register left at the zero default (that’s about 35 seconds/year, or 1ppm), but the same RTC showed near zero milliseconds of drift over five hours when the aging offset was set to -21:
RTC temp rise during this test
Msec drift approaching zero after 5h, with TCXO adjustment.
Error ppm ~0, with very little jitter on this -SN chip
Error (ppm) vs Runtime (sec): This drift verification on a DS3231-M was done with an age offset of -33 from the Step 1 test. The ‘b’ term of the Excel linear trendline fit is less than the 0.12ppm/bit register adjustment, confirming that -33 is optimal for this chip. The absolute timing change over this 2.5h test was less than half a msec fast relative to the GPS pulse.
Even with temperature rising 3°C during the test, that -SN stays within a tighter tolerance than the -M. This difference in short-term variability explains why the offset determination settles so quickly with a -SN, but can wander around for some time with a -M. The code used here in Step 3 is like a slow, verbose version of what’s being done in Step 1, showing all the intermediate readings. If you put a linear trendline on the graph of the error PPM from running this test with the offset left at the zero default, you can estimate how much age register adjustment it would take to shift those readings until the average is near zero. The aging offset suggested by the test in Step 1 should be close to the result of dividing the ‘b’ term from the y=mx+b trendline fit by 0.1ppm (SN) or 0.12ppm (M) and flipping the sign.
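That trendline rule can be written down directly (a sketch assuming the nominal per-LSB sensitivities; a real chip’s sensitivity will differ a little, so treat the result as a starting point, not an exact answer):

```cpp
#include <cassert>
#include <cmath>

// Aging register estimate from the 'b' term of a y = mx + b trendline fit
// to the ppm-error column: divide by the nominal per-LSB sensitivity at
// 25C (0.1 ppm for -SN, 0.12 ppm for -M) and flip the sign.
int agingFromTrendline(double bTermPpm, bool isSN) {
    const double ppmPerLSB = isSN ? 0.10 : 0.12;
    return (int)lround(-bTermPpm / ppmPerLSB);
}
```

For the -M example above (+2 ppm average at the zero default, best offset -17) this lands right on target: 2 / 0.12 ≈ 17, sign flipped.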
On his blog, NNP also demonstrated how the two chip variants have different responses to temperature changes:
Temp(°C) vs Time(sec): The RTC modules were exposed to this overall pattern although the final tests were run faster (blue = program, orange = PID actual)
Drift Error (ppm) vs Time (sec): The spikes occur because the TCXO corrections only happen every 64 seconds on the -SN variant, but after each one the chip quickly returns to its ±2ppm spec.
While still within its rated ±5ppm, the -M shows greater variability. With the -M chips doing corrections every 10 seconds, I’m surprised the overall TCXO response takes longer.
This confirms what I already suspected from our data: the -SN chips are a much better choice for outdoor environments where temperatures vary over that entire 50°C range. Although the temperature coefficient of the MEMS oscillator is not specified in the datasheet, loggers built with -M chips are probably still fine for stable thermal environments, and with a tuned Aging register I’d expect them to drift less than ten seconds per year indoors. There are other insights if you dig into NNP’s blog. For example, drift is also affected by the physical orientation of the chip with respect to gravity – I had no idea this was a problem for all quartz resonators unless the crystal is cut into a special shape to avoid it. This highlights the fact that with so many different factors affecting the RTC, the Aging offset adjustment will never be perfect; you are simply aiming to reduce the ‘average’ drift. These tests are also affected somewhat by the stability of the oscillator on the UNO, so we have a bit of a chicken & egg problem there.
I will be doing more tests to see what other insights this can give into our ProMini / DS3231 combination. With the ability to synchronize clocks so precisely (in Step 2), you can see the outliers in a group in as little as 24 hours simply by watching the LED blinks. I already do multiple rapid burn-in tests with new loggers as part of pre-deployment testing, so visually checking synchronization during those runs is a low-effort way to verify the RTC. One thing I’ve long suspected, but have never seen any actual proof of, is that the process of updating registers and generating alarms also affects the clock time. Perhaps handling the I2C transaction blocks the update of some internal counter? I could test this by setting one logger in a ‘well matched’ group to wake every second, while the others blink at five seconds, and see how many days it takes for the fast blinker to shift out of alignment.
It would be interesting to get a couple of the newer DS3232 / DS3234 chips, and test how much they drift with their TCXO interval pushed out to 512 seconds for 1µA current, instead of the 64-second default that pushes the DS3231’s average standby current up to about 3µA.
Last Word
With these three tools to wrangle our time series we could see drift as low as five seconds per year from an -SN in a stable environment, so our little 328p loggers can finally go toe-to-toe with all those smugly networked ESP32s. I will eventually combine these into a general RTC testing utility, but there are plenty of use cases for each as an isolated step – especially if you were tweaking the code for use with different RTC chips. Likewise, with the NEO being a 3v device I could add a few header pins to our loggers for the serial comms and run everything with the logger alone.
But I’m somewhat sentimental about the original UNOs, so it’s nice to dust them off once in a while. Another benefit is that if you run two separate instances of the IDE you can choose a different COM port for each instance. So you can simultaneously have the UNO/GPS combination connected, and a ProMini logger connected via its own UART module. As long as you align the code open in each instance with the appropriate port, you can run those RTC tests on the UNO in the background while you work in the other instance of the IDE. This will be very handy when servicing loggers in the field. I will secure those field calibration rigs with hot glue and make them more compact with a ‘sit-on-top’ protoshield.
The quid pro quo when adjusting the Aging register is that the reduced drift within the tuning temperature range comes at the cost of increased non-linearity at more extreme temperatures. But the underwater/cave sites we deploy into are quite stable compared to surface conditions, so it’s probably worth the trade. Physical aging rates are not necessarily constant or linear, so I expect that register will need a yearly update. The first complete generation of fully sync’d & calibrated RTCs will get deployed this fall, so it will be a while before I can check how aging is affected by exposure to real-world temperature variation. I’ll be happy if I can get -M’s below 1 second of drift per month under those conditions. I would hope to see the aging stabilize after the first year of operation, in a manner similar to sensor aging.
At the very least, we’ve greatly enhanced our ability to remove any duffers from those cheap eBay parts. I’m still wondering what new sensor possibilities better time discipline might enable but I can already see some interesting labs for the next cohort of e360 students. One of the more challenging things to demonstrate within the constraints of a classroom is the relationship between datasheet error specifications and sensor drift. I’ll set aside a few -M modules for those teaching loggers so the results are more dramatic.
Just adding a reminder here: the DS3231 doesn’t have a true mechanism to disable alarms after they’ve been set. You can clear the alarm flag to release SQW after it fires, but the alarm will still be armed and will fire again at the next time match – no matter how you set the ‘enable/disable’ bits. The ONLY way to disable alarms on the DS3231 is to load those registers with an ‘invalid’ h/m/s combination that the actual time can never reach (eg: minutes/seconds set to 62, or the date set to Feb 31st). You can also set the EOSC bit of the control register to logic 1, which stops the oscillator when the DS3231 is on Vbat power – but you will then be unable to check clock drift at the next logger download. Halting the internal oscillator is also the only way to stop the temperature conversions.
From the data sheets you can see that the -M uses about half as much power for its TCXO corrections (about 26 milliamp-seconds/day) as the -SN chip does (45 mAs/d); however, our standard ProMini/DS3231-SN module combination usually sleeps around 890nA, while the same logger built with a DS3231-M sleeps closer to 1680nA (when a temp compensation reading is not occurring). A sleeping 328p-based ProMini draws ~150nA (regulator removed & BOD off), and the 4K AT24C32 EEPROMs on the modules draw less than 50nA when not being accessed. So the -M chips have more than 2x the ~700nA Ibat timekeeping draw of the -SN chips.
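Those milliamp-second budgets are easier to compare as average currents (simple arithmetic, hypothetical helper name):

```cpp
#include <cassert>
#include <cmath>

// Average current implied by a daily charge budget: mA*s per day spread
// over the 86400 seconds in that day, returned in nanoamps.
double avgNanoAmps(double mAsPerDay) {
    return mAsPerDay / 86400.0 * 1e6;
}
```

26 mAs/day averages to roughly 300 nA and 45 mAs/day to roughly 520 nA of continuous draw.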
A 3d printed stack of radiation shields goes around the 30mL centrifuge tube housing our 2-module logger. A universal ball joint by DiZopfe was adapted for the leveling mechanism which is critical for the calibration.
Space nerds have an old saying that ‘LEO is half way to the moon…’ and Arduino hobbyists tend to feel the same way about getting sensor readings displayed on a live IoT dashboard. But that ignores the real work it takes to generate data that’s actually usable. To paraphrase Heinlein: ‘Calibration is half way to anywhere…’ Now that our 2-Part logger is both easy for students to build and robust enough for field use, we can focus on developing sensor calibration methods that are achievable by teachers and researchers in low-resource settings.
Light sensors seem straightforward, with numerous how-to guides at Hackaday, Adafruit, Sparkfun, etc. In reality, light sensors are some of the trickiest to actually deploy – which is why so few low-end climate stations include them. This post describes a method for calibrating a BH1750 lux sensor to estimate Photosynthetically Active Radiation (PAR). Not everyone can afford a LI-COR 190 or Apogee SQ quantum sensor to use as a benchmark, so here we will use a clear-sky model calculation for the cross-calibration, despite the dynamic filtering effects of the atmosphere on natural sunlight. Using a diffuser to restore cosine response means we can’t calculate PPFD directly from Lux without some y=mx+b coefficients.
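The cross-calibration itself reduces to applying that linear fit. A minimal sketch (the coefficients shown are placeholders, not calibration results – each sensor/diffuser combination gets its own m and b from regression against the clear-sky model):

```cpp
#include <cassert>
#include <cmath>

// PPFD estimate (umol/m2/s) from a lux reading via a linear calibration.
// m and b come from regressing logged lux against the reference model.
double ppfdFromLux(double lux, double m, double b) {
    return m * lux + b;
}
```

As a sanity check only: a frequently quoted ballpark for direct daylight on a bare sensor is ~0.0185 µmol/m²/s per lux, but a diffuser-capped sensor will need its own, much larger slope.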
Peak solar irradiance received on any given day varies by latitude and season, as does the overall pattern. Light emitted from the sun has a stable distribution of frequencies; however, the spectrum at the earth’s surface varies across the day, with more short (blue) wavelengths around midday and more long (red) wavelengths at sunrise & sunset, when the rays travel further through the atmosphere. We will avoid this source of error by calibrating with data from the hours around solar noon, as determined by the NOAA Solar Calculator. Even with high quality sensors, morning and evening data can be compromised by other factors like condensation, which changes the refractive index of lenses and diffusers.
Light Sensor Issue #2: Sensitivity Bands
Average plant response to light as Relative Photosynthetic Efficiency (%) vs Wavelength (nm) compared to Bh1750 Response Ratio vs Wavelength
Lux sensors have a maximum sensitivity near 550nm, mimicking the response of photoreceptors in the human eye. Plants are similarly limited to wavelengths that can be absorbed by the various chlorophylls. These two bands have a high degree of overlap, so we can avoid the Baader UV/IR-Cut filters (420–685nm bandpass) or stack of Roscolux filters that would be needed with photodiodes that respond to a wider range of incoming radiation. The cross-calibration still requires the relative ratio of wavelengths within the targeted region to remain stable, so a PAR conversion derived under full sunlight may not be valid under a canopy of tree leaves, or for the discontinuous spectra of ‘blurple’ grow-lights.
Light Sensor Issue #3: Dynamic Range
I tested two inexpensive BH1750 sensor modules, and the diffuser dome that comes with the red ‘Light Ball’ version turned out to be the deciding factor. When powered from a 3v coin cell, these sensors add 8µA to the logger’s sleep current if you leave the 662 regulator in place, and <1µA if you remove it.
Full summer sunlight can exceed 120,000 Lux, and there aren’t many sensors in the Arduino ecosystem that handle that entire range. The BH1750 can, with its registers set to the least sensitive configuration. Our logger code already does this, because a QUALITY_LOW & MTREG_LOW(31) integration takes only 16-24 milliseconds, rather than the 120-180ms needed for high-resolution readings. The datasheet implies that the sensor will flatline before 100,000 lux, but at its lowest sensitivity it delivers reasonable data above 120k, though linearity may be suspect as flux approaches sensor saturation. The sensor also has a maximum operating temperature of 85°C, which can be exceeded if your housing suffers too much heat gain. Alternative sensors like the MAX44009, TSL2591 and SI1145 have similar thermal limits. Like most light sensors, the BH1750 increases its output readings by a few percent as the sensor warms.
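That extended range comes from the measurement-time register. The BH1750 datasheet conversion is counts divided by 1.2, scaled by the ratio of the default MTreg (69) to the MTreg in use:

```cpp
#include <cassert>
#include <cstdint>
#include <cmath>

// BH1750 raw count -> lux, per the datasheet formula. Lowering MTREG from
// the default 69 to 31 trades resolution for range: each count is worth
// more lux, so full scale lands well above 100k.
double bh1750Lux(uint16_t raw, uint8_t mtreg) {
    return (raw / 1.2) * (69.0 / mtreg);
}
```

At MTREG 31, a full-scale count of 65535 works out to about 121,500 lux – which is why the sensor can still report usable numbers above the 100k ceiling the datasheet implies.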
Commercial vs DIY diffusers. Bullseye level indicators are epoxied to the top shield with white JB Marine Weld. The larger 43mm diameter bubble (right) was far more effective than the smaller 15mm (left).
DIY builders often add diffusers made from translucent #7328 Acrylite or PTFE sheets to bring sunlight intensity down into a given sensor’s range. I tried printing domes with clear PETG and hand sanding them with fine grit to increase the diffusive power. While these did reduce light levels by more than 50%, my DIY diffuser didn’t quite match the smooth overall response seen with the diffusers that came with the round PCB modules. This may have been due to a slight misalignment between the sensor and the focal point of the low-poly dome I could make in Tinkercad. The white dome that comes with the red BH1750 module reduced peak light levels in full sunlight from the 110k Lux reported by a ‘naked’ sensor to about 40k Lux. Each sensor varied somewhat in its response, but I didn’t do any batch testing to quantify this as I was calibrating each sensor directly to the reference model. I initially tried clear JB Weld as a sealant but this caused problems: sometimes contracting enough to peel parts away from the PCB, and yellowing significantly after a couple of weeks of full sun exposure. In later builds I used only a thin coating of silicone conformal coating, relying on an epoxy seal around the base of the diffuser to provide most of the waterproofing.
Light Sensor Issue #4: Angular Response
BH1750 Directional Characteristics [Figs 4&5] from the datasheet. Sensor response differs on the two axes, so orientation must be labeled on the outside during assembly. The left graph is closer to Lambertian, so the sensor gets deployed with its connection pads oriented North–South relative to the sun’s east–west motion. Based on these curves alone we would expect a ‘naked’ BH sensor to under-report relative to the Lambertian ideal. That is indeed what I observed in our early sensor comparison tests, leading to our selection of the round red PCB modules for the calibration because the included diffuser dome compensated nicely.
Lambert’s cosine law describes the illuminance on a flat matte surface as proportional to the cosine of the zenith angle (as the sun changes position throughout the day). At an incident angle of 60°, the number of photons hitting a sensor surface is half what it would be if the same light source was positioned directly above the sensor. This effect is mathematically predictable, but imperfections, diffraction, and surface reflection mean that sensor response tends to diverge from the ideal as the angle increases. So manufacturers surround their sensors with raised diffuser edges and recesses on the top surface which change light collection at low sun angles to restore a perfect cosine response. In general, diffusers make the compass orientation of the sensor less likely to interfere with calibration, but leveling the sensor is still absolutely required.
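As a sanity check, the cosine relationship is trivial to compute; this little sketch just confirms the halving at 60°:

```python
import math

# Lambert's cosine law: illuminance on a horizontal surface scales
# with the cosine of the light source's zenith angle.
def relative_illuminance(zenith_deg):
    return math.cos(math.radians(zenith_deg))

print(relative_illuminance(0))    # 1.0: source directly overhead
print(relative_illuminance(60))   # ~0.5: half the photon flux
```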
Light Sensor Issue #5: Temporal Resolution
Unlike most environmental parameters, light levels can change instantaneously. Most commercial sensors aggregate 1 or 2 second readings into 5 to 15 minute averages. This makes it much easier to estimate energy output from solar panels, or calculate the Daily Light Integral for a crop because both of those use cases are more concerned with area under the curve rather than individual sensor readings. However, in our case of calibrating a sensor against an irradiance model, we must use instantaneous readings so we can exclude data from periods where the variability is high. Averaging would smooth over short term interference from clouds, birds, or overhead wires, potentially leading to bad data in the calibration. We read the BH1750 once per minute at its fastest/lowest resolution.
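A minimal sketch of that screening step (the five-sample window and 5% tolerance are arbitrary choices for illustration, not values from our logger code): any minute-reading that strays too far from the median of its neighbours gets excluded before the calibration fit.

```python
# Screen instantaneous readings for calibration: drop any reading that
# differs from the median of its neighbourhood by more than a tolerance.
def stable_points(lux, window=5, tolerance=0.05):
    keep = []
    half = window // 2
    for i in range(half, len(lux) - half):
        neighbourhood = sorted(lux[i - half:i + half + 1])
        median = neighbourhood[len(neighbourhood) // 2]
        if median and abs(lux[i] - median) / median <= tolerance:
            keep.append((i, lux[i]))
    return keep

# A passing bird drops one reading; that minute is excluded.
series = [40000, 40100, 40200, 31000, 40300, 40400, 40500]
print(stable_points(series))   # -> [(2, 40200), (4, 40300)]
```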
A Radiation Shield
My original concept was to epoxy the light sensor directly onto the cap and slide separate radiation shields over the housing tube with a friction fit – but that approach suffered excessive heat gain. It took several design iterations to discover that plastics are often transparent to IR – so most of the 3D printed weather station shields you find in maker spaces won’t work very well. While PLA does block/reflect the visible spectrum, it then re-emits a portion of any absorbed energy as IR which passes right through – turning the central housing tube into a tiny greenhouse. You need to add layers of metal foil to reflect that IR, and there must be an air gap between the materials or the heat still crosses by conduction. The process of moving those surfaces away from the logger also meant placing the sensor onto a small raised ‘stage’ that could pass through the upper shield. This allows easier replacement after the sensors expire, or the use of an entirely different sensor without changing the rest of the design. I still don’t know the operating life of these sensors at full sunlight exposure levels.
2″ Aluminum HVAC tape is applied to the IR shield layer. (click to enlarge these photos)
The IR shield slides to about 8mm below the top shield which has holes along the rim to vent heated air.
The sensor stage slides on the vertical rails and passes through the upper shield.
The logger’s green cap then pushes the sensor stage into place with a snug click-fit. Foil is wrapped around the logger housing tube.
Three smaller gill shields slide onto the rails, with plenty of aligned holes for vertical airflow through to the top shield.
Here are temperature records of two side-by-side loggers with identical 3D-printed shields except that one has the three metal foil layers added and one does not:
Temp (°C) vs Time: Comparison of heat gain with, and without metal foil layers. Measured with the NTC sensor inside the logger housing at the center of the stack. The night time data shows a 0.25°C offset between the two sensors, indicating that they were not normalized before this run.
Interestingly, the 3°C delta seen in my foil vs no-foil tests matched the discrepancies identified by Terando et al. in their 2017 paper examining ad hoc Stevenson shields in ecological studies. Air gaps are required for IR-reflecting layers to do their job, so most of the foil-backed roofing shingles on the market are ineffective because of direct surface contact. Both aluminum and stainless steel foils are common, but aluminum has a lower emissivity than stainless steel, meaning it should reflect more and emit less IR. There are also radiant barrier coating sprays used in industrial settings. High-end weather stations use fan ventilation or helical shields, but those designs may be a bit too complicated for DIY. And even 3D prints from tough materials like PETG or ASA would benefit from a coating like Krylon UV protectant to extend their lifespan. I’ve also been thinking about adding some infrared cooling paint on the top surface of our weather stations. The challenge with anything that emits in the atmosphere’s transparency window between wavelengths of 8 and 13 microns is that you get significant accumulation of debris on surfaces in as little as one month of actual deployment: especially in the spring/fall when the surfaces get covered with morning dew which then captures any windborne dust.
I’m still tweaking the shield design as more test data comes in, and hope to compare it to a fan aspirated model soon. Radiation shields are only needed if you want to capture accurate temperatures with the light readings on the same logger. The Bh1750 calibration alone could be done without shields, but mounting the sensor on some kind of flat surface makes it easier to add the required leveling bubble beside the sensor. The tradeoff for preventing solar heat gain is that shields introduce lag in the temperature response.
Pole Mount & Leveling Mechanism
As this is the first of our ‘garden series’ that will be built around the 2-part logger, I created a complete mounting system from a combination of 3D printed parts and PVC pipes. This adjustable leveling mechanism was modified from the Open Source Universal Ball Joint posted on Thingiverse by Arthur ZOPFE.
This socket slides over the end of a 1/2″ PVC pipe. A zip tie through the drilled cross-hole secures the pieces together.
A self standing 30mL centrifuge tube slides snugly into this fitting, again with holes for zip ties.
A large diameter twist ring makes it easy to adjust the sensor assembly while watching the bulls-eye level on the top shield.
This ball & socket approach works well for leveling, but to make the adjustments easier (ie. with less compressive force) I will add an O-ring to the bottom cup for some friction and give.
This ground spike has a foot plate to assist insertion and is asymmetric to provide more contact with the bed. It just barely fits on my Ender3 when printed diagonally. I created this model from scratch in Tinkercad, but the offset idea is not mine. Unfortunately, I saw the original so long ago I don’t know who to credit for it. The pole insert and holes are six-sided because internal 45° slopes can be printed without any supports, and you can simply bridge the internal 1cm top span.
A length of standard 1/2 inch PVC pipe is used for the riser between the spike and the leveling mechanism. Ideal height for temperature sensors is approximately five feet above the ground, usually in a shaded location facing away from the sun.
The Apogee Clear Sky Calculator
With this model we could even attempt a calibration against the shortwave spectrum for a DIY pyranometer, but it’s a much bigger stretch to say the 550nm peak of BH sensitivity is a good proxy for the whole 300–1300nm band of wavelengths.
The Apogee Clear Sky Calculator helps operators of their many light sensor products check if those need to be sent in for re-calibration. When used near solar noon on clear unpolluted days the accuracy is estimated to be ±4%. We can cross-calibrate the readings from our humble BH1750 to that model provided we use data from a cloudless day. I’m not sure what the temporal resolution of the ClearSky model is (?) The U.S. Climate Reference Network generally uses two-second readings averaged into five minute values, so it is likely that the ClearSky model has a similar resolution. The model has its best accuracy within one hour of solar noon, but we will push that out a few hours so we have enough data for the regression.
We could have used the Bird Clear Sky Model from NREL, with validation against real world data from one of the local SURFRAD stations at NOAA. That data is for full-spectrum pyranometers measuring in W/m², but you can estimate PAR as photosynthetic photon flux density (PPFD) from total shortwave radiation using a conversion factor into µmol s⁻¹ m⁻². Many solar PV companies provide online calculators for power density that could also be used for this kind of DIY sensor calibration.
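For anyone attempting that route, the conversion is simple arithmetic. The 45% PAR fraction and the 4.57 µmol/J-per-watt-of-PAR figure below are typical clear-sky literature values, not constants taken from the NREL model:

```python
# Rough conversion from total shortwave irradiance (W/m^2) to PPFD
# (umol m^-2 s^-1). The 0.45 PAR energy fraction and 4.57 umol/J per
# watt of PAR are typical sunlight values, not universal constants.
PAR_FRACTION = 0.45        # fraction of shortwave energy in 400-700 nm
UMOL_PER_PAR_WATT = 4.57   # photon flux per watt of PAR in sunlight

def shortwave_to_ppfd(shortwave_w_m2):
    return shortwave_w_m2 * PAR_FRACTION * UMOL_PER_PAR_WATT

# ~1000 W/m^2 of midsummer sun works out to roughly 2000 umol/m^2/s
print(shortwave_to_ppfd(1000))
```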
Our Deployment Location
Most who live in urban areas are familiar with noise pollution, but it is also hard to find undisturbed light environments. My best option for those critical hours around solar noon was my neighbour’s backyard garden:
The two sensors here are aligned on the east-west axis so they can be compared.
This location was relatively free of power lines and tree tops, but reflections from that white door caused a slight positive offset in the afternoon. Fences prevented the capture of morning and evening data which would have been interesting. But sunrise to sunset data is not required for our calibration.
The Calibration
After several weeks of logger operation we finally managed to capture data from a beautiful cloud-free day:
2024-07-27: Lux from a diffused ‘Light Ball’ BH1750 sensor (orange, left axis @ 1min) VS ClearSky Model PPFD (purple, right axis @ 5min). You can see some stair-stepping in the model data, indicating that its temporal resolution might be only 10-15 minutes.
We logged raw single-shot Lux readings at one minute intervals, and because there is no averaging applied you can clearly see where overhead lines or birds created occasional short-duration shading. These outliers were excluded before generating the trendline shown below. The PAR values from the model were calculated using the ‘Auto fill’ option for humidity and temperature. On this day solar noon was at 12:57.
Linear y=mx+b fit between ClearSkyCalculator PPFD (Y axis) vs Diffused BH1750 Lux (X axis) using 5 minute data points on 2024-07-27 between 10:00 and 16:00 [bracketing solar noon by three hours]. Two shadow outliers at 10:05 and 10:15am were excluded from the dataset.
Aerosols and variations in local temp/humidity produced some scatter but this is a good result for calibration with natural light. The result might be improved by co-deploying a humidity sensor, but it’s not clear to me if humidity at ground level is what the model actually uses for its calculation. Some scatter is also being created by the temporal resolution of the model. Using one type of sensor as a proxy for another limits the scope of the device and we probably approached an accuracy of ±15% at best with this conversion. It’s worth remembering that most commercial light sensors are only calibrated to ±5%.
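The same slope & intercept fit can be reproduced outside of a spreadsheet with an ordinary least-squares calculation; the lux/PPFD pairs below are made-up placeholders, not our calibration data:

```python
# Ordinary least-squares y = m*x + b, equivalent to Excel's SLOPE()
# and INTERCEPT() functions used for the calibration fit.
def slope_intercept(x, y):
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    m = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
        / sum((xi - mean_x) ** 2 for xi in x)
    b = mean_y - m * mean_x
    return m, b

lux  = [10000, 20000, 30000, 40000]   # hypothetical diffused BH1750 readings
ppfd = [500, 1010, 1490, 2005]        # hypothetical ClearSky model values
m, b = slope_intercept(lux, ppfd)

# PPFD estimate from any future Lux reading:
print(m * 25000 + b)
```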
Discussion
The biggest challenge at our mid-west location was that we had to run the loggers for several weeks before capturing the blue-sky day shown above. A typical time series from that BH1750 sensor (under a light-reducing diffuser dome) looks like this:
Lux vs Time: 1 minute data captured with our 2-Part logger reading a red ‘light-ball’ Bh1750 module. This unit had an extra 64k EEprom added to store the large amount of data that was generated.
Clouds often cause light levels to exceed those seen on clear days. This makes sense if you imagine a situation where there are no clouds directly overhead, but radiation reflected from the sides of nearby clouds reaches the sensor from multiple directions. The fact that clouds at different atmospheric levels have different effects is one of the things that makes climate models so complicated.
The Clear-Sky Calculator lets you generate data for any date/time, so it would be possible to do this calibration by aggregating cloudless periods from multiple days:
Detail of data from 7/15 and 7/12: what you are looking for is the smooth curve that indicates there were no high level clouds causing subtle variations in light level.
Inexpensive (~$60USD) PAR meters have started appearing on Amazon recently. I’m more than a little dubious about the term ‘quantum’ in the marketing (?) as they are probably just a photodiode and some filters.
Someone in Nevada would have no trouble gathering this kind of calibration data, but it might not be possible for people living in Washington. A low-cost alternative to using a clear-sky model for the calibration could be to compare the BH1750 to one of the many smartphone grow light meter apps, with a clip-on diffuser & cosine corrector. Every phone has a different sensor, so programs like Photone or PPFDapp usually have their own calibration procedures. While developing this exercise I also found a ‘for parts’ Seaward Solar Survey 100 on eBay for $20, and all it needed to bring it back to life was a good cleaning inside. I also found an old Li-1400 logger with a 190 pyranometer for only $120 and was pleasantly surprised when Apogee’s calculator showed it was still within 5%. As mentioned, you’d need to convert total radiation from those last two into PAR, or you could do the calibration to total shortwave. Hardware references that lack logging capability require more effort to gather calibration points, but they save you from having to wait for agreeable weather.
Other projects have built similar sensors, and with calibration Lux sensors are comparable to commercial PAR sensors if the spectral environment is consistent. Multi-channel sensors with overlapping bands do a better job in situations with discontinuous light sources, like those used for indoor growing or for measuring the extinction of PAR frequencies under water. In those cases a TCS3471 (3-channel), AS7341 (10-channel), or AS7265x (18-channel) sensor could be used, and finer frequency division enables the calculation of interesting ratios like NDVI or SPAD. Beyond that point you’re entering the realm of diffraction grating spectrometers, which allow a more nuanced treatment of spectral response than the standard PAR weighting.
And if building your own datalogger is too challenging, you could reproduce the exercise described in this post with a Bluetooth UNI-T or a UT381 Digital Luminometer, which has some logging capability. But you will need to add extra diffusers to bring full sunlight down below its 20,000 Lux limit.
Once your project starts to grow it’s common to have multiple different sensors, from different vendors, measuring the same environmental parameter. Ideally those sensors would produce the same readings, but in practice there are significant offsets. Datasheets for the MS5837-02BA and MS5803-14BA that we will compare in this post claim an accuracy of ±0.5mbar and ±2ºC for the 2-bar sensor, while the 14-bar sensors are only rated to ±20mbar and ±2ºC. Sensors from Measurement Specialties are directly code compatible, so the units here were read with the same oversampling settings.
Barometric pressure from a set of nine MS58xx pressure sensors running on a bookshelf as part of normal burn-in testing. The main cluster has a spread of about 10 millibar, with one dramatic outlier >20 mbar from the group. These offsets are much wider than the datasheet spec for those 2-bar sensors.
But this is only a starting point: manufacturers have very specific rules about things like the temperature ramps during reflow and it’s unlikely that cheap sensor modules get handled that carefully. Housing installation adds both physical stress and thermal mass which will induce shifts; as can the quality of your supply voltage. Signal conditioning and oversampling options usually improve accuracy, but there are notable exceptions like the BMP/E 280 which suffers from self-heating if you run it at the startup defaults.
As described in our post on waterproofing electronics, we often mount pressure sensors under mineral oil with a nitrile finger cot membrane leading to thermal lag.
Sensors like NTC thermistors are relatively easy to calibrate using physical constants. But finding that kind of high quality benchmark for barometric sensors is challenging if you don’t live near a government-run climate station. So we typically use a normalization process to bring a set of different sensors into close agreement with each other. This is a standard procedure for field scientists, but information on the procedure is hard to find because the word ‘normalization’ means different things in various industry settings. In Arduino maker forums it usually describes scaling the axes of a single accelerometer with (sensor – sensor.min)/(sensor.max – sensor.min) rather than standardizing a group of different sensors.
When calibrating to a good reference you generally assume that all the error is in your cheap DIY sensor, and then do a linear regression by calculating a best fit line with the trusted data on the Y axis of a scatter plot. However, even in the absence of an established benchmark you can use the same procedure with a ‘synthetic’ reference created by drawing an average from your group of sensors:
Note: Sensor #41 was the dramatic outlier more than 20 millibar from the group (indicating a potential hardware fault) so this data is not included in the initial group average.
With that average you calculate y = Mx + B correction constants using Excel’s slope & intercept functions. Using these formulas lets you copy/paste equations from one data column to the next, which dramatically speeds up the process when you are working through several sensors at a time. It also recalculates those constants dynamically when you add or delete information:
The next step is to calculate the differences (residuals) between the raw sensor data and the average, before and after the y=Mx+B corrections have been applied to the original pressure readings. These differences between the group average and an individual sensor should be dramatically reduced by the corrections:
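The whole spreadsheet workflow condenses to a few lines of code. This sketch uses three fabricated sensor columns just to show the mechanics of the synthetic reference, the per-sensor fit, and the residual check:

```python
# Normalization sketch: build a synthetic reference by averaging the
# group, fit y = M*x + B for each sensor against that reference, then
# inspect the residuals after correction. Readings here are made up.
def fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = sum((a - mx) * (b - my) for a, b in zip(x, y)) \
        / sum((a - mx) ** 2 for a in x)
    return m, my - m * mx

sensors = {                      # raw pressure readings, mbar
    "s1": [1001.0, 1005.0, 1009.0],
    "s2": [ 998.5, 1002.5, 1006.5],
    "s3": [1003.5, 1007.5, 1011.5],
}
# synthetic reference = per-timestamp mean across the group
reference = [sum(vals) / len(sensors) for vals in zip(*sensors.values())]

for name, raw in sensors.items():
    m, b = fit(raw, reference)
    corrected = [m * r + b for r in raw]
    residuals = [c - ref for c, ref in zip(corrected, reference)]
    print(name, [round(r, 3) for r in residuals])
```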
After you copy/paste these calculations to each sensor, create x/y scatter plots of the residuals so you can examine them side-by-side:
Now we can deal with the most important part of the entire process, because normalization with bad input data will produce even more misleading results. While the errors shown above are centered around zero, the patterns in these graphs indicate that we are not finished. In the ideal case, residuals should be soft fuzzy distributions with no observable patterns. But here we have a zigzag showing up for most of the sensors. This is an indication that one (or more) of the sensors included in the average has some kind of problem. Scrolling further along the columns identifies the offending sensors by their nasty looking residual plots after the corrections have been applied:
Sensor #41 (far right) was already rejected from the general average because of its enormous offset, but the high amplitude jagged residual plots indicate that the data from sensors #45 and #42 are also suspect. If we eliminate those two from the reference average the zigzag pattern disappears from the rest of the sensors in the set:
There’s more we could learn from the residual distributions, but here we’ve simply used them to prune our reference data, preventing bad sensor input from harming the average we use for our normalization.
And what do the sensor plots look like after the magic sauce is applied?
The same set of barometric pressure sensors, before and after normalization corrections. (minus #41 which could not be corrected)
It’s important to note that there is no guarantee that fitting your sensors to an average will do anything to improve accuracy. However, sensors purchased from different vendors, at different times, tend to have randomly distributed offsets. In that case normalization improves both precision and accuracy, but the only way to know if that has happened is to validate against some external reference like the weather station at your local airport. There are several good long term aggregators that harvest METAR data from these stations like this one at Iowa State, or you can get the most recent week of data by searching for your local airport code at weather.gov
METAR is a weather reporting format used predominantly by pilots and meteorologists, and the stations report pressure adjusted to ‘Mean Sea Level’. So you will have to adjust your data to MSL (or reverse the correction on the airport data) before you can compare it to the pressure reported by your local sensors. For this you will also need to know the exact altitude of your sensors when the data was gathered, to remove the height offset between your location and the airport station.
Technically speaking, you could calibrate your pressure sensors directly to those official sources. However there are a lot of Beginner, Intermediate and Advanced details to take care of. Even then you still have to be close enough to know both locations are in the same weather system.
Here I’m just going to use the relatively crude adjustment equations: Station Pressure = SLP – (elevation/9.2) and millibar = inchHg x 33.8639 to see if we are in the ballpark.
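Those two conversions chained together look like this (the 29.92 inHg reading and 230 m elevation are placeholder numbers, not from our deployment):

```python
# Crude sea-level-to-station adjustment used for the sanity check:
# station pressure (mbar) = sea level pressure - elevation(m) / 9.2,
# after converting METAR inches of mercury to millibar.
INHG_TO_MBAR = 33.8639

def station_pressure_mbar(metar_inhg, elevation_m):
    slp_mbar = metar_inhg * INHG_TO_MBAR
    return slp_mbar - elevation_m / 9.2

# e.g. a METAR altimeter of 29.92 inHg seen from a sensor 230 m up:
print(round(station_pressure_mbar(29.92, 230), 1))   # ~988.2 mbar
```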
Barometric data from the local airport (16 miles away) overlaid on our normalized pressure sensors. It’s worth noting that the airport data arrives at strange odd-minute intervals, with frequent dropouts which would complicate a calibration to that reference.
Like most pressure sensors an MS58xx also records temperature because it needs that for internal calculation. So we can repeat the entire process with the temperature readings from this sensor set:
Temperatures °C from a set of MS58xx Pressure sensors: before & after group normalization. Unlike pressure, this entire band was within the ±2ºC specified in the datasheet.
These sensors were sitting pretty far back on a bookshelf that was partly enclosed, so some of them were quite sheltered while others were exposed to direct airflow. So I’m not bothered by the spikes or the corresponding blips in those residual plots. I’m confident that if I had run this test inside a thermally controlled environment (ie: a styrofoam cooler with a small hole in the top) the temperature residuals would have been well behaved.
One of the loggers in this set had a calibrated NTC thermistor onboard. While this sensor had significant lag because it was located inside the housing, we can still use it to check if the normalized temperatures benefit from the same random distribution of errors that were corrected so nicely by the pressure normalization:
Once again, we have good alignment between a trusted reference (in red) and our normalized sensors.
Comments:
Normalization is a relatively low-effort way to improve sets of sensors – and it’s vital if you are monitoring systems that are driven primarily by gradients rather than absolute values. The method generalizes to many other types of sensors, although a simple y=Mx+B approach usually does not handle exponential sensors very well. As with calibration, the data set used for normalization should span the range of values you expect to gather with the sensors later on.
The method described here only corrects differences in Offset [with the B value] & Gain/Sensitivity [the M value] – more complex methods are needed to correct non-linearity problems. To have enough statistical power for accuracy improvement you want a batch of ten or more sensors and it’s a good idea to exclude data from the first 24 hours of operation so brand new sensors have time to settle. Offsets are influenced by several factors and some sensors need to ‘warm up’ before they can be read. The code driving your sensors during normalization should be identical to the code used to collect data in the field.
All sensor parameters drift so, just like calibration, normalization constants have a shelf life. This is usually about one year, but can be less than that if your sensors are deployed in harsh environments. Fortunately this kind of normalization is easy to redo in the field, and it’s a good way to spot sensors that need replacing. You could also consider airport/NOAA stations as stable references for drift determination.
I have to add a special mention here of the heroic effort by liutyi comparing different temperature & humidity sensors. While his goal was not normalization, the graphs clearly demonstrate how important that would be if you were comparing a group of sensors. Humidity sensors have always been a thorn in our side – both for lack of inter-unit consistency and because of their short lifespan in the field relative to other sensor types. The more expensive Sensirion sensors tend to last longer – especially if they are inside one of those protective shells made from sintered metal beads. KanderSmith also did an extensive comparison of humidity sensors with more detailed analysis of things like sensor response time.
This post describes a thermistor calibration, achievable by people who don’t have access to lab equipment, with an accuracy better than ±0.15°C. The method is particularly suitable for the 10k NTC on our 2-module data logger, handling the loggers in a way that is easy to standardize for batch processing (ie: at the classroom scale). We use brackets to keep the loggers completely submerged because the thermal conductivity of the water around the housing is required, or the two sensors would diverge. The target range of 0° to 40°C used here covers moderate environments, including the underwater and underground locations we typically deploy into. This method is unique in that we use a freezing process, rather than melting ice, for the 0°C data point.
Use stainless steel washers in your hold-downs to avoid contamination of the distilled water and provide nucleation points to limit super-cooling. Before creating this bracket we simply used zip-ties to hold the washer weights.
Reading a thermistor with digital pins uses less power, and gives you the resistance of the NTC directly from the ratio of two Input Capture Unit times. Resolution is not set by the bit depth of your ADC, but by the size of the reservoir capacitor: a small ceramic 0.1µF [104] delivers about 0.01°C, with jitter in the main system clock imposing a second limit on resolution at nearly the same point. Larger reservoir capacitors increase resolution and reduce noise, but take more time and use more power. The calibration procedure described in this post will work no matter what method you use to read your NTC thermistor.
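The ratiometric idea behind that digital-pin method reduces to one line of arithmetic: time the same capacitor discharge through a known reference resistor and through the NTC, and the supply voltage and exact capacitance cancel out. The tick counts below are invented for illustration:

```python
# RC discharge timing is proportional to R for a fixed capacitor and
# threshold voltage, so timing the same cap through a known reference
# resistor and through the NTC gives: R_ntc = R_ref * (t_ntc / t_ref).
# Supply voltage and exact capacitance cancel out of the ratio.
def ntc_resistance(ref_ohms, ticks_ref, ticks_ntc):
    return ref_ohms * (ticks_ntc / ticks_ref)

# 10k 0.1% reference resistor, hypothetical timer tick counts:
print(ntc_resistance(10000, ticks_ref=8192, ticks_ntc=9420))  # ~11.5k ohms
```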
The I2C reference sensor is connected temporarily during the calibration via Dupont headers. Always give your reference sensors serial numbers so that you can normalize them before doing the thermistor calibrations.
Off-the-shelf sensors can be used as ‘good enough’ reference thermometers provided you keep in mind that most accuracy specifications follow a U-shaped curve around a sweet spot that’s been chosen for a particular application. The Si7051 used here has been optimized for the medical market, so it has ±0.1° accuracy from 35.8 to 41° Celsius, but that falls to ±0.13° at room temperatures and only ±0.25° at the ice point. If you use some other reference sensor (like the MAX30205 or the TSYS01) make sure its datasheet specifies how the accuracy changes over the temperature range you are targeting with the calibration.
The shortened Steinhart–Hart equation used here is not considered sufficiently accurate for bench-top instruments, which often use a four or five term polynomial. However, in ‘The Guide on Secondary Thermometry’ by White et al. (2014) the three-term equation is expected to produce interpolation errors of only about 0.0025°C over a range from 0 to 50°C, and that is acceptable for most monitoring. To calculate the three equation constants you need to collect three temperature & resistance data pairs, which can be entered into the online calculator at SRS or processed with a spreadsheet.
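For those working in a script rather than the SRS calculator, the three constants come from a 3×3 linear solve. The resistance values below are illustrative numbers for a nominal 10k NTC, not calibration measurements:

```python
import math

# Solve the three-term Steinhart-Hart equation
#   1/T = A + B*ln(R) + C*ln(R)^3
# for A, B, C from three (temperature, resistance) pairs.
def steinhart_coefficients(pairs):           # pairs of (deg C, ohms)
    rows, rhs = [], []
    for t_c, r in pairs:
        L = math.log(r)
        rows.append([1.0, L, L ** 3])
        rhs.append(1.0 / (t_c + 273.15))
    # Gauss-Jordan elimination on the 3x3 system
    for i in range(3):
        p = rows[i][i]
        rows[i] = [v / p for v in rows[i]]
        rhs[i] /= p
        for j in range(3):
            if j != i:
                f = rows[j][i]
                rows[j] = [a - f * b for a, b in zip(rows[j], rows[i])]
                rhs[j] -= f * rhs[i]
    return rhs                                # [A, B, C]

# illustrative values for a nominal 10k NTC, not real measurements
A, B, C = steinhart_coefficients([(0, 32650), (25, 10000), (40, 5302)])

def ntc_temperature(r_ohms):
    L = math.log(r_ohms)
    return 1.0 / (A + B * L + C * L ** 3) - 273.15

print(round(ntc_temperature(10000), 3))   # recovers 25.0 at a fit point
```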
While these technical sources of error limit the accuracy you can achieve with this method, issues like thermal lag in the physical system and your overall technique are more important. In general, you want each step of the calibration process to occur as slowly as possible. If the data from a run doesn’t look the way you were expecting – then do the procedure over again until those curves are well behaved and smooth. Make sure the loggers stay dry during the calibration – switching to spare dry housing tubes between the baths: Moisture is the greatest cause of failure in sensors and humidity/water always lowers the resistance of thermistors. If in doubt, let everything dry for 24 hours before re-doing a calibration.
Data Point #1: The freezing point of water
The most common method of obtaining a 0°C reference is to place the sensor into an insulated bucket of stirred ice slurry that plateaus as the ice melts. This is fine for waterproof sensors on the end of a cable, but it is not easily done with sensors mounted directly on a PCB. So we immerse the loggers in collapsible 1200ml silicone food containers filled with distilled water. This is placed inside a well insulated lunch box, and the combined assembly is left in the freezer overnight, logging a reading every 30 seconds.
Weighted holders keep each logger completely immersed. Soft-walled silicone containers expand to accommodate any volume change as the water freezes. This prevents the centrifuge tube housings from being subjected to pressure as the ice forms. Position the loggers so that they are NOT in direct contact with the sides or the lid of the silicone container.
The outer box provides insulation to slow down the freezing process. After testing several brands it was found that the Land’s End EZ wipe (9″x8″x4″) and Pottery Barn Kids Mackenzie Classic lunch boxes provided the best thermal insulation because they have no seams on the solid molded foam interior which also doesn’t absorb water spilled while moving the containers around.
For the purpose of this calibration (at ambient pressure) we can treat the freezing point of pure water as a physical constant, so no reference sensor is needed on the logger while you collect the 0°C data. Leave the lunch box in the freezer just long enough for a rind of ice to form around the outer edges while the main volume of water surrounding the loggers remains liquid. I left the set in this photo a bit too long, as that outer ice rind is much thicker than it needed to be for the data collection. Do not let the water freeze completely solid (!) as this will subject the loggers to stress that may crack the tubes and let water in to ruin your loggers.
The larger bubbles in this photo were not present during the freeze, but were created by moving the container around afterward for the photo.
The trick is recognizing which data represents the true freezing point of water. Distilled water super-cools by several degrees, and then rises to 0°C for a brief period after ice nucleation because the phase change releases 80 calories per gram, while the specific heat capacity of water is only one calorie per gram per degree. So freezing at the outer edges warms the rest of the liquid – but this process is inherently self-limiting, which gives you a plateau at exactly 0°C after the rise:
NTC resistance (ohms) gathered during the freeze/thaw process, graphed with the y-axis inverted because of the thermistor's negative temperature coefficient. The warm-temperature data has been removed from the graphs above to display only the relevant cold-temperature data. Only the 10–20 minutes of data immediately after the rise from the super-cooled state is relevant to the calibration. Cooling the insulated chamber from its room-temperature starting point to the supercooling spike shown above took 7–8 hours.
Depending on the strength of your freezer, and the quality of the outer insulating container, the ice-point may only last a few minutes before temperatures start to fall again. An average of the NTC readings from that SHORT plateau immediately after the supercooling ends is your 0°C calibration point. This is usually around 33000 ohms for a 10k 3950 thermistor. Only the data immediately after super-cooling ends is relevant, and the box can be removed from the freezer any time after that event. I left the example shown above in the freezer too long, but you have a reasonable window of time to avoid this. Once the freeze process initiates, it usually takes about 8 hours for the entire volume to freeze solid – after which you can see the compressor cycling as the now-solid block cools below 0°C. You want to pull the sensors out of the freezer before that solid stair-step phase (at 8:00 above) if possible.
If the supercooling spike is not obvious in your data, then change your physical configuration to slow the cooling process until it appears. You want the inner surface of your silicone container to have smooth edges, as sharp corners may nucleate the ice at 0°C and prevent the supercooling spike from happening. Use as much distilled water as the container will safely hold – the loggers should be surrounded by water on all sides.
In this image a freezer compressor cycle happened during the post-supercooling rise, making it hard to see where the plateau occurred. This run was re-done to get better data.
Most refrigerators cycle based on how often the door is opened and those cycles can overprint your data making it hard to interpret. If you put a room-temperature box of water in the freezer between 6-7pm, it usually reaches the supercooling point around 2am, reducing the chances that someone will open the refrigerator/freezer door at the critical time. Even then, unexpected thermal excursions may happen if the freezer goes into a defrost cycle or an automatic ice-maker kicks in during the run. The time to reach that supercooling event can be reduced by pre-cooling the distilled water to ~5°C in the refrigerator before the freezer run. If any of the points on your curves are ambiguous, then do that run again, making sure the water is completely ice free at the start.
As a technical aside, the energy released (or absorbed) during the phase change of water is so much larger than its typical thermal content that water based heat pumps can multiply their output significantly by making slushies.
Data Point #2: Near 40°C
We have used the boiling point of water for calibration in the past, but the centrifuge tube housings would soften considerably at those temperatures. Ideally you want to bracket your data with equally spaced calibration points and 100°C is too far from the environmental conditions we are targeting. Heated water baths can be found on eBay for about $50, but my initial tests with a Fisher Scientific IsoTemp revealed thermal cycling that was far too aggressive to use for calibration – even with a circulation pump and many layers of added insulation. So we created an inexpensive DIY version made with an Arctic Zone Zipperless Coldloc hard-shell lunch box and a 4×6 inch reptile heating mat (8 watt). Unlike the ice point which must be done with distilled water, ordinary tap water can be used to collect the two warm temperature data pairs.
These hard-sided Arctic Zone lunch boxes can often be obtained for a few dollars at local charity shops or on eBay.
Place the 8-watt heating pad under the hard shell of the lunch box. At 100% power this tiny heater takes ~24 hours to bring the bath up to ~38°C. The bath temp is relatively stable since the heater does not cycle, but it does experience a slow drift based on losses to the environment. These heating pads sell for less than $15 on Amazon.
To record the temperature inside each logger, an Si7051 breakout module (from Closed Cube) is attached to the logger. A hold down of some kind must keep the logger completely submerged for the duration of the calibration. If a logger floats to the surface then air within the housing can thermally stratify and the two sensors will diverge. That data is not usable for calibration so the run must be done again with that logger.
The reference sensor needs to be as close to the NTC sensor as possible within the housing – preferably with the chip directly over top and facing the NTC thermistor.
Data Point #3: Room Temperature
The loggers stay in the heated bath for a minimum of 4 hours, but preferably 8–12 hours; you want the whole assembly to have enough time to equilibrate. Then they are transferred to an unheated water-filled container (in this case a second Arctic Zone lunch box) where they run at ambient temperatures for another 8–12 hours. This provides the final reference data pair:
Si7051 temperature readings inside a logger at a 30 second sampling interval. The logger was transferred between the two baths at 8am. Both baths are affected by the temperature changes in the external environment.
Detail: Warm temp. NTC ohms (y-axis inverted)
Detail: Room temp. NTC ohms (y-axis inverted)
As the environment around the box changes, losses through the insulation create gentle crests or troughs where the lag difference between the sensors changes sign. So averaging several readings across those inflection points cancels out any lag error between the reference sensor and the NTC. Take care that you average exactly the same set of readings from both the Si7051 and the NTC. At this point you should have three Temperature / Resistance data pairs that can be entered into the SRS online calculator to calculate the equation constants ->
I generally use six digits from the reference pairs, which is one more than I’d trust in the temperature output later. I also record the Beta constants for live screen output because that low accuracy calculation takes less time on limited processors like the 328p.
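As a sketch of that faster Beta-model calculation (the function name is illustrative, and the defaults assume the nominal 10k @ 25°C, B=3950 figures – substitute your own recorded Beta constant):

```cpp
#include <cmath>

// Low-accuracy Beta-model temperature, the cheaper calculation used
// for live screen output on limited processors like the 328p.
// T = 1/(1/T0 + ln(R/R0)/Beta) - 273.15, with T0 in Kelvin.
double betaTempC(double ohms, double beta = 3950.0,
                 double r0 = 10000.0, double t0C = 25.0) {
    double t0K = t0C + 273.15;
    return 1.0 / (1.0 / t0K + log(ohms / r0) / beta) - 273.15;
}
```

At the nominal resistance (10k) this returns exactly 25°C, and errors grow as you move away from that reference point – which is why the full three-term equation is used for the stored data.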
The final step is to use those constants to calculate the temperature from the NTC data with: Temperature °C = 1/(A+(B*LN(ohms))+(C*(LN(ohms))^3))-273.15
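A minimal implementation of that conversion might look like the following; the helper name is illustrative, and the default A/B/C values (typical for a 10k 3950 thermistor) are placeholders for your own calibration constants:

```cpp
#include <cmath>

// Steinhart–Hart temperature from measured NTC resistance:
// T(°C) = 1/(A + B*ln(ohms) + C*ln(ohms)^3) - 273.15
double ntcTempC(double ohms,
                double A = 1.009249522e-3,
                double B = 2.378405444e-4,
                double C = 2.019202697e-7) {
    double lnR = log(ohms);
    return 1.0 / (A + B * lnR + C * lnR * lnR * lnR) - 273.15;
}
```

Run this over the raw ohm readings from the calibration before graphing them against the Si7051 reference.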
Then graph the calculated temperatures from the NTC calibration readings over top of the reference sensor temperatures. Provided the loggers were completely immersed in the water bath, flatter areas of the two curves should overlap one another precisely. However, the two plots will diverge when the temperature is changing rapidly, because the NTC exhibits more thermal lag than the Si7051: the NTC is located near the thermal mass of the ProMini circuit board.
Si reference & NTC calculated temperatures: If your calibration has gone well, the curves should be nearly identical as shown above, with exceptions only in areas where the temperature was changing rapidly and the two sensors got out of sync because of their different thermal lags.
Also note that the warm and room-temperature data points can be collected with separate runs. In fact, you could recapture any individual data pair and recalculate the equation constants with two older ones any time you suspect a run did not go smoothly. Add the constants to all of the data column headers, and record them in a Google Doc with the three reference pairs and the date of the calibration.
Validation
You should always do a final test to validate your calibrations, because even when the data is good it’s easy to make a typo somewhere in the process. Here, a set of nine calibrated NTC loggers are run together for a few days in a gently circulating water bath at ambient temperature –>
Two from this set are a bit high and could be recalibrated, but all of the NTC temperature readings now fall within a 0.1°C band. This is a decent result from a method you can do without laboratory grade equipment, and the sensors could be brought even closer together by using this validation data to normalize the set.
Comments
The method described above uses equipment small enough to be portable, allowing easy classroom setup/takedown. More importantly, this also enables the re-calibration of loggers in the field if you have access to a freezer. This makes it possible to re-run the calibrations and then apply compensation techniques to correct for sensor drift. Validating calibration before and after each deployment is particularly important with DIY equipment to address questions about data quality at publication. Glass-encapsulated NTC thermistors drift up to 0.02°C per year near room temperatures, while epoxy-coated sensors can drift up to 10x that.
At the ice-point, our resolution is ~0.0025°C but our time-based readings vary by ±0.0075°C. This is due to timing jitter in the ProMini oscillator and in the interrupt handling by a 328p. So with a [104] reservoir capacitor in the timing circuit, our precision at 0°C is 0.015°C.
Having a physical constant in the calibration data is important because most of the affordable reference sensors in the Arduino landscape were designed for applications like healthcare, HVAC, etc. So they are usually designed to minimize error in warmer target ranges, while getting progressively worse as you approach 0°C. But accuracy at those lower temperatures is important for environmental monitoring in temperate climates. The method described in this post could also be used to calibrate commercial temperature sensors if they are waterproof.
Calibrating the onboard thermistor is a good idea even if you plan to add a dedicated temperature sensor, because you always have to do some kind of burn-in testing on a newly built logger – so you might as well do something productive with that time. I generally record as much data as possible during the calibration to fill more memory and flag potentially bad areas in the EEprom. (Note: Our code on GitHub allows only 1, 2, 4, 8, or 16 bytes per record to align with page boundaries.) And always look at the battery record during the calibration, as it’s often your first clue that a DIY logger might not be performing as expected. It’s also worth mentioning that if you also save the RTC temperatures as you gather the NTC calibration data, this procedure gives you enough information to calibrate that register as well. The resolution is only 0.25°C, but it does give you a way to check if your ‘good’ temperature sensors are drifting, because the DS3231 tends to be quite stable.
While the timing jitter does not change, the non-linearity of the NTC resistance curve reduces the resolution at warmer temperatures to 0.005°C. Precision at 35°C also suffers, falling to 0.02°C. Using a 10x larger [105] reservoir cap would get us back to the resolution we had at 0°C, as would oversampling – which actually requires this kind of noise for the method to work. Either of those changes would draw proportionally more power from the coin cell for each reading, so it's a trade-off that might not be worth making when you consider sensor lag.
For any sensor calibration the reference points should span the range you hope to collect later in the field. To extend this procedure for colder climates you could replace the ice point with the freezing point of Galinstan (-20°C) although a domestic freezer will struggle to reach that. If you need a high point above 40°C, you can use a stronger heat source. Using two of those 8 watt pads in one hard sided lunch box requires some non-optimal bending at the sides, but it does boost the bath temp to about 50°C. 3D printed PLA hold-downs will start to soften at higher temps so you may need to alter the design to prevent the loggers from popping out during the run.
If your NTC data is so noisy you can’t see where to draw an average, check the stability of your regulator because any noise on the rail will affect the Schmitt trigger thresholds used by our ICU/timer method. This isn’t an issue running from a battery, but even bench supplies can give you noise related grief if you’ve ended up with some kind of ground loop. You could also try oversampling, or a leaky integrator to smooth the data – but be careful to apply those techniques to both the reference and the NTC in exactly the same way because they introduce significant lag. Temperature maximums are underestimated and temperature minimums are overestimated by any factor that introduces lag into the system. In general, you want to do as little processing to raw sensor readings as possible at capture time because code-based techniques usually require some prior knowledge of the data range & variation before they can be used safely. Also note that our digital pin ICU based method for reading resistors does not work well with temperature compensated system oscillators because that compensation circuitry could kick in between the reference resistor and NTC readings.
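A leaky integrator can be sketched in integer math like this – a generic illustration, not the project's filter code. The divisor k sets the smoothing strength; a larger k gives a smoother line but more lag, which is why it must be applied identically to both the reference and the NTC streams:

```cpp
// Leaky-integrator smoothing for noisy integer sensor readings.
// The running state is kept scaled by k so no floating point is
// needed; the smoothed estimate at any time is state / k.
long leakyIntegrate(long state, long newReading, int k) {
    state = state - (state / k) + newReading;
    return state;
}
```

After enough samples the state converges to k times the steady-state reading, so a constant input of 100 with k=16 settles at state/16 == 100.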
And finally, the procedure described here is not ‘normalization’, which people sometimes confuse with calibration. In fact, it’s a good idea to huddle-test your sensors in a circulating water bath after calibration to bring a set closer together even though that may not improve accuracy. Creating post-calibration y=Mx+B correction constants is especially useful for sensors deployed along a transect, or when monitoring systems that are driven by relative deltas rather than by absolute temperatures. Other types of sensors like pressure or humidity have so much variation from the factory that they almost always need to be normalized before deployment – even on commercial loggers.
2023 is the ten-year anniversary of the Cave Pearl Project, with hundreds of loggers built from various parts in the Arduino ecosystem and deployed for Dr. Beddows’ research. During that time her EARTH 360 – Instrumentation course evolved from using commercial equipment to having students assemble a logging platform for labs on environmental monitoring. The experience of those many first-time builders has been essential to refining our educational logger design to achieve maximum utility from a minimum number of components. So, in recognition of their ongoing and spirited enthusiasm, we call this new model the e360.
A standard 50mL centrifuge tube forms the housing, which is waterproof to about 8 meters depth. For better moisture resistance use tubes with an O-ring integrated into the cap (made of silicone or ethylene propylene) which gets compressed when the threads are tightened.
A bracket for the logger which can be installed horizontally or vertically. Zip ties pass through the central support to wrap around the 50ml centrifuge tube. This prints without any generated supports and the STL can be downloaded for printing from the Github repository for this logger.
Many parallel trends have advanced the open-source hardware movement over the last decade, including progress towards inexpensive and (mostly) reliable 3D printing. In keeping with the project’s ethos of accessibility, we use an Ender 3 for the rails, and you can download that printable STL file directly from Tinkercad. Tinkercad is such a beginner-friendly tool that students are asked to create their own logger mounting brackets from scratch as an exercise in the Lux/LDR calibration lab. This directly parallels our increasing use of 3D prints for installation brackets & sensor housings on the research side of the project.
One of the things that distinguishes this project from others in the open science hardware movement is that instead of constantly adding features like IOT connectivity, we have been iterating towards simplicity. Cheap, flexible, stand-alone loggers enable many teaching and research opportunities that expensive, complicated tools cannot. However, there are a few trade-offs with this minimalist 2-module design: Supporting only Analog & I2C sensors makes the course more manageable, but losing the DS18b20, which has served us so well over the years, does bring a tear to the eye. Removing the SD card used on previous EDU models means that memory becomes the primary constraint on run-time. The RTC’s one-second alarm means this logger is not suitable for higher frequency sampling, and UV exposure makes the 50ml tubes brittle after 3-4 months in full sun. Coin cell chemistry limits operation to environments that don’t go far below freezing – although it’s easy enough to run the logger on two lithium AAAs in series, and we’ve tested those down to -15°C.
The basic logger kit costs about $10 depending on where you get the DS3231 RTC & 3.3v/8MHz ProMini modules. Pre-assembly of the UART cable, NTC cluster & LED can be done to shorten lab time. CP2102 6pin UARTs are cheap, and have good driver support, but you have to make that Dupont crossover cable because the pins don’t align with the ProMini headers.
Sensor modules for lab activities: TTP233 touch, BMP280 pressure, BH1750 lux, AM312 PIR, 1k & 10k pots, and a sheet of metal foil for the capacitive sensing lab. Other useful additions are a Piezo Buzzer, a 0.49″ OLED and 32k AT24c256 EEproms. The screen is $4, but the other parts should cost about $1 each.
You can find all the parts shown here on eBay and Amazon – except for the rail, which needs to be printed, but these days it’s relatively easy to send 3D models out to a printing service if someone at your school doesn’t already have a printer. Expect 15% of the parts from cheap suppliers like eBay or Amazon to be high drain, or simply DOA. We order three complete lab kits per student to cover defects, infant mortality, and replacement of parts damaged during the course. This is usually their first time soldering and some things will inevitably get trashed in the learning process – but that’s OK at this price point. We also order each part from three different vendors, in case one of them is selling rejects from a bad production run. The extra parts allow students to build a second or third logger later on in the course, which is often needed for their final project.
I’ve used short jumpers here to make the connections clear, but it’s better to use longer wires from a 20cm F–F Dupont ribbon to make these cables. Only the 3.3v output from the CP2102 gets connected to the logger.
CP2102 UART -> ProMini
DTR -> DTR
RXD -> TXO
TXD -> RXI
GND -> GND
3V3 -> VCC
Macintosh USB-C to USB-A adapters are smart devices with chips that will shut down if you unplug from the computer while a battery-powered logger is still connected. The coin cell back-feeds enough voltage to put the dongle into an error state. Always disconnect the UART-to-logger connection FIRST instead of simply pulling the whole string of still-connected devices out of the computer.
After installing OS drivers for your UART, you need to select the IDE menu options: [1] TOOLS > Board: Arduino Pro or Pro Mini [2] TOOLS > Processor: ATmega328 (3.3v, 8mhz) [3] TOOLS > Port: Match the COM# or /dev that appears when you connect the UART
On this UART the 5v connection had to be cut with a knife before soldering the 3.3v pads together to set the output voltage.
For many years we used FT232s, but the current Windows drivers will block operation if you get one of the many counterfeit chips on the market. If you do end up with one of those fakes, only OLD drivers from 2015/2016 will get that UART working with the IDE. To avoid that whole mess, we now use CP2102s or CH340s. Some UARTs require you to cut or bridge solder pads on the back side to set the 3.3v that an 8MHz ProMini runs on. Many I2C sensor modules on the market also require this lower voltage. Avoid Pro Minis with the much smaller 328P-MU variant processors. They may be compatible inside the chip, but the smaller solder pad separation makes the overall logger noticeably more susceptible to moisture-related problems later.
Assembling the logger:
This e360 model is based on the 2-Module logger we released in 2022, with changes to the LED & NTC connections to facilitate various lab activities in the course. That post has many technical details about the logger that have been omitted here for brevity, so it’s a good idea to read through that extensive background material when you have time.
Prepare the RTC module:
Clipping the Vcc supply leg (2nd leg in from the corner) puts the DS3231 into a low-power mode powered by the backup battery, and also disables the 32kHz output.
Disconnect the indicator LED by removing its limit resistor.
Remove the 200Ω charging resistor, and bridge the Vcc via to the battery power trace at the black end of the diode.
Cutting the VCC input leg forces the clock to run on VBAT, which reduces the DS3231 chip's constant current to less than 1µA – though that can spike as high as 550µA when the TCXO temperature reading occurs (every 64 seconds). The temperature conversions and the DS3231 battery standby current average out to about 3µA, so the RTC is responsible for most of the power used by this logger over time. If the time reads 2165/165/165 instead of the normal startup default of 2000/01/01, then the registers are bad and the RTC will not function. Bridging Vcc to Vbat means a 3.3V UART will drive some harmless reverse current through older coin cells while connected. DS3231-SN RTCs drift up to 61 seconds/year, while -M chips drift up to 153 sec/year. If the RTC's temperature readings are off by more than the ±3°C spec, then the clocks will drift more than that.
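The relationship between an oscillator spec in ppm and drift per year is just a multiplication against the ~31.5 million seconds in a year, so the annual figures above are approximate conversions of the ±2ppm (-SN) and ±5ppm (-M) datasheet limits:

```cpp
// Convert oscillator error in parts-per-million to approximate
// clock drift in seconds per year (365.25 days).
double driftSecondsPerYear(double ppm) {
    return ppm * 1e-6 * 365.25 * 24.0 * 3600.0;
}
```

For example, 2ppm works out to roughly 63 seconds/year and 5ppm to roughly 158 seconds/year, in line with the worst-case numbers quoted above.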
It’s a good idea to do a breadboard test of those RTC modules (with the logger base-code) before assembling your logger.
Modify & Test the Pro Mini:
A Pro Mini style board continues as the heart of the logger, because they are still the cheapest low-power option for projects that don’t require heavy calculations.
Carefully clip the 2-leg side of the regulator with sharp side-snips and wobble it back and forth till it breaks the 3 legs on the other side.
Remove the limit resistor for the power indicator LED with a hot soldering iron tip.
Clip away the reset switch. This logger can only be started with serial commands via a UART connection.
Add 90° UART header pins and vertical pins on D2 to D6. Also add at least one analog input (here shown on A3). Students make fewer soldering errors when there are different headers on the two sides of the board for orientation.
Bend these pins inward at 45° and tin them for wire attachments later. D4 & D5 are used for the capacitive sensor lab.
Do not progress with the build until you have confirmed the ProMini has a working bootloader by loading the blink sketch onto it from the IDE.
Add the NTC/LDR Sensors & LED indicator
These components are optional, but provide opportunities for pulse width modulation and sensor calibration activities.
Join a 10k 3950 NTC thermistor, a 5528 LDR, a 330Ω resistor and a 0.1µF [104] ceramic capacitor. Then heat shrink the common soldered connection.
Thread these through D6=LDR, D7=NTC, D8=330Ω, and connect the capacitor to ground at the end of the Pro Mini module. Note that the D6/D7 connections could be any resistive sensors up to a maximum value of 65k ohms.
Solder the sensor cluster from the bottom side of the Pro Mini board and clip the tails flush with the board. Clean excess flux with alcohol & a cotton swab.
The way we read resistive sensors using digital pins is described in this post from 2019, although to reduce part count & soldering time in this e360 model we use the ~30k internal pullup resistor on D8 as the reference value that the NTC and LDR get compared to. We have another post describing how to calibrate NTC thermistors in a classroom setting. Noise/variation in the NTC temperature readings is ±0.015°C, so the line on a graph of rapid 1-second readings is usually about 0.03°C thick. Range switching with two NTCs could also be done if the max/min resistance values of one thermistor can’t deliver the resolution you need.
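The ratiometric step at the heart of that method can be sketched as follows: because the same capacitor and the same Schmitt trigger threshold are used for both the reference and the sensor, those factors cancel, leaving only the ratio of the two charge times. The function name is illustrative, assuming you have already captured the timer counts for both pins:

```cpp
// Ratiometric resistance from two rise-time measurements through the
// same capacitor: R_sensor = R_ref * (t_sensor / t_ref).
// The cap value and trigger threshold cancel out of the ratio.
unsigned long resistanceFromCounts(unsigned long ntcCounts,
                                   unsigned long refCounts,
                                   unsigned long refOhms = 30000UL) {
    // 64-bit intermediate avoids overflow with large timer counts
    return (unsigned long)(((unsigned long long)ntcCounts * refOhms)
                           / refCounts);
}
```

Note that the accuracy of the result depends on how well you know the actual value of that internal pullup, which is why the loggers get calibrated against a reference thermometer rather than trusting the nominal 30k.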
Add a 1/8-watt 1kΩ limit resistor to the ground leg of a 5mm common-cathode RGB LED. Extend the red channel leg with ~5 cm of flexible jumper wire.
Insert Blue=D10, Green = D11, GND = D12. Solder these from the under side of the Pro Mini and clip the excess length flush to the board.
Bring the red channel wire over and solder it through D9. Note that if the RGB is not added, the default red LED on D13 can still be used as an indicator.
You can test which leg of an LED is which color with a CR2032 coin cell, using the negative side of the battery on the ground leg. The LED color channels are soldered to ProMini pins Red=D9, Blue=D10, Green=D11, and a 1k limit resistor is added to the D12–GND connection to allow multi-color output via the PWM commands that those pins support.
Join the Two Modules via the I2C Bus:
Use legs of a scrap resistor to add jumpers to the I2C bus connections on A4 (SDA) and A5 (SCL). Trim any long tails from these wires left poking out from the top side of the ProMini.
Cover the wires with small diameter heat-shrink and bend them so they cross over each other. The most common build error is forgetting to cross these wires.
Use another scrap resistor to extend the Vcc and GND lines vertically from the tails of the UART headers. This is the most challenging solder joint of the whole build.
Add a strip of double-sided foam tape across the chips on the RTC module and remove the protective backing.
Carefully thread the I2C jumpers though the RTC module.
Press the two modules together and check that the two boards are aligned.
Check that the two I2C jumpers are not accidentally contacting the header pins below, then solder all four wires into place on the RTC module.
Bend the GND wire to the outer edge of the module, and trim the excess from the SDA and SCL jumpers. Adding a capacitor to the power wires helps the coin cell handle brief loads:
Optional: Solder a 470µF [477A] to 1000µF [108J] tantalum capacitor to the VCC and GND wires. Clip away the excess wire.
Tin the four I2C headers on the RTC module and the SQW alarm output pin.
Join the RTC’s SQW output to the header pin on D2 with a short length of flexible jumper wire. At this point the logger core is complete and could operate as a stand-alone unit.
Bend the four I2C header pins up to 45 degrees.
As soon as you have the two modules together: connect the logger to a UART and run an I2C bus scanning program to make sure you have joined them properly. This should report the DS3231 at address 0x68, and the 4K EEprom at 0x57.
Add Rails & Breadboard Jumpers:
Clip the bottom posts away from two 25 tie-point mini breadboards.
Insert the breadboards in the rails. Depending on the tolerance of your 3D print, this may require more force and/or a deburring tool to make the hole larger.
Mount the breadboards flush with the upper surface of the rails. If they are too loose in your print, they can be secured quickly with a drop of hot glue, or with cyanoacrylate super-glue sprinkled with a little baking soda to act as an accelerator.
The 3D printed rails have a pocket cutout for the logger stack. The RTC module board should sit flush with the upper surface of the rail. Hot glue can be applied from the underside through the holes near the corners to hold the logger to the rails…
or thin zip ties or twisted wire can hold the logger stack in place. The legs of a scrap resistor can be used if the holes on your RTC module are too small for zips. (see 1:06 in the build video)
Check that the RTC pcb is flush in the pocket at opposite diagonal corners.
Cut two 14 cm lengths of 22AWG solid core wire. Insert stripped ends into the breadboards as shown, then route though the holes in the rail.
Secure the wires from the underside with a zip tie. Note: the ‘extra’ holes in the rail are used to secure small desiccant packs during deployment.
Route the solid core wires along the side of the breadboard and back out through the two inner holes near the logger stack.
The green wire should exit on the analog side of the Pro Mini and the blue wire should be on the digital side.
Route and trim the green wire to length for the A3 header.
Strip, tin and solder the wire to the A3 pin.
Repeat the process for the blue wire, connecting it to D3.
Extend the four I2C headers on the RTC module with 3cm solid-core jumpers. Here, white is SDA (data) and yellow is SCL (clock).
Bend the jumpers into the breadboard contacts. BMP280 and BH1750 sensor modules usually require this crossover configuration.
A video covering the whole assembly process:
NOTE: For people with previous building experience we’ve also posted a 4 minute Rapid Review.
The base code requires the RocketScream LowPower.h library to put the logger to sleep between readings; this can be installed via the library manager in the IDE. In addition to the included NTC/LDR combination, the code has support for the BMP/E280, BH1750 (lux), and PIR sensors, although you will need to install libraries (via the IDE's library manager) for some of them. Sensors are added by uncommenting define statements at the beginning of the code. Each sensor enabled after the single-byte LowBat & RTCtemp defaults contributes two additional bytes per sampling event, because every sensor's output gets loaded into a 16-bit integer variable.
The basic sensors cover light, temperature, pressure and humidity – so you could teach an introductory enviro-sci course by enabling or disabling those sensors before each lab. Note: while the BME280 is quite good for indoor measurements where very high RH% occurs rarely; SHT30 or AM2315C sensors encapsulated in water resistant PTFE shells are better choices for long term weather stations.
BMP280 outputs can be saved individually. Total bytes per sampling record must be 1, 2, 4, 8 or 16 ONLY. You may need to add or remove RTC temp or Current Battery to make the byte total correct for a new sensor.
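That page-boundary rule amounts to requiring a power-of-two record size no larger than 16 bytes, so records never straddle an EEprom page. A sketch of that check (a hypothetical helper, not from the base code):

```cpp
// Records must be 1, 2, 4, 8 or 16 bytes so they divide evenly into
// EEprom pages. A power of two has exactly one bit set, so
// n & (n-1) == 0 for valid sizes.
bool validRecordSize(unsigned bytesPerRecord) {
    if (bytesPerRecord == 0 || bytesPerRecord > 16) return false;
    return (bytesPerRecord & (bytesPerRecord - 1)) == 0;
}
```

So a configuration totaling 6 bytes per record would need one 2-byte field added or removed before it could be saved.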
But limiting this tool to only the pre-configured sensors would completely miss the point of an open-source data logger project. So we’ve tried to make the process of modifying the base-code to support different sensors as straightforward as possible. Edits are required only in the places indicated by call-out numbers on the following flow diagrams. These sections are highlighted with comments labeled: STEP1, STEP2, STEP3, etc. so you can locate them with the find function in the IDE.
Those comments are also surrounded by rows of +++PLUS+++ symbols:

//++++++++++++++++++++++++++++++++++++++++++
// STEP1 : #include libraries & Declare Variables HERE
//++++++++++++++++++++++++++++++++++++++++++
In Setup()
2024 note: Additional start-menu options have been added since this graphic was created in 2023, and there are a few additional debugging options that are not displayed unless serial output is enabled.
A UART connection is required to access the start-up menu through the serial monitor window in the IDE. This menu times out after 8 minutes, but the sequence can be re-entered at any time by closing and re-opening the serial monitor. This restarts the Pro Mini via a pulse sent from the UART's DTR (data terminal ready) pin. The start-up menu should look similar to the screen shot below, although the options may change as new code updates get released:
If you see random characters in the serial window, you have the baud rate set incorrectly. Set the baud to 500,000 (with the pulldown menu on the lower right side of the serial monitor window) and the menu should display properly after you close & re-open the window. If you Ctrl-A & Ctrl-C to copy data from the serial monitor when the window still has garbled characters displayed, then only the bad starting characters will copy out. On a new logger: Hardware, Calibration & Deployment fields will display as rows of question marks until you enter some text via each menu option.
The first menu option asks if you want to download data from the logger, after which you can copy/paste everything from the serial window into a spreadsheet. Then, under the Data tab in Excel, select Text to Columns to divide the data into separate columns at the comma separators. Or you can paste into a text editor and save a .csv file for import to other programs. While this transfer is a bit clunky, everyone already has the required cable and retrieval is driven by the logger itself. We still use the legacy 1.8.x version of the IDE, but you could also do this download with a generic serial terminal app. You can download the data without battery power once the logger is connected to a UART. However, you should only set the RTC after installing a battery, or the time will reset to 2000/01/01 00:00 when the UART is disconnected. No information is lost from the EEprom when you remove and replace a dead coin cell.
A Unix timestamp for each sensor reading is reconstructed during data retrieval by adding successive second-offsets to the first record time saved during startup. It is important that you download old data from a previous run before changing the sampling interval, because the interval stored in memory is used for the calculation that reconstructs each record's timestamp. This technique saves a significant amount of our limited memory, and =(Unixtime/86400) + DATE(1970,1,1) converts those Unix timestamps into Excel's date-time format. Valid sampling intervals must divide evenly into 60 and be less than 60. Short second-intervals are supported for rapid testing & debugging, but you must first enter 0 for the minutes before the seconds entry is requested. The unit will keep using the previous sampling interval until a new one is set. It helps to have a utility like Eleven Clock running so that you have HH:MM:SS displayed on your computer screen when setting the logger's clock.
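The reconstruction arithmetic is straightforward; here is an illustrative Python sketch (function names are my own, not from the logger code):

```python
from datetime import datetime, timezone

def reconstruct_timestamps(first_record_unixtime, interval_seconds, record_count):
    """Rebuild per-record Unix timestamps from the single start time saved
    at launch. Only the first time is stored; every later stamp is just
    the start plus a multiple of the sampling interval."""
    return [first_record_unixtime + i * interval_seconds for i in range(record_count)]

# Example: a run started 2024-06-01 00:00:00 UTC at a 15-minute interval
start = int(datetime(2024, 6, 1, tzinfo=timezone.utc).timestamp())
stamps = reconstruct_timestamps(start, 15 * 60, 4)

# Excel equivalent of =(Unixtime/86400)+DATE(1970,1,1):
# 25569 is Excel's date serial for 1970-01-01.
excel_serial = stamps[0] / 86400 + 25569
```

This is also why changing the interval before downloading corrupts old timestamps: the reconstruction uses whatever interval is currently stored, not the one the data was captured with.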
The easiest way to measure the supplied voltage while the logger is connected to USB/UART power is at the metal springs in the Dupont cable.
Vref compensates for variations in the reference voltage inside the 328p processor. Adjusting that constant up or down by 400 raises/lowers the reported voltage by 1 millivolt. Adjust this by checking the voltage supplied by your UART with a multimeter while running the logger with #define logCurrentBattery enabled and serial output Toggled ON at a 1 second interval. Note the difference between the millivolts you actually measured and the battery voltage reported on the serial monitor, and then multiply that by 400 to get the adjustment you need to make to the 1126400 value for vref. Restart and save this new number with the [ ] Change Vref menu option and repeat this procedure until the battery reading on screen matches what you are measuring with the DVM. This adjustment only needs to be done once, as the number you enter is stored in the 328p EEprom for future use. Note that most loggers run fine with the default 1126400 vref, although some units will shutdown early because they are under-reading. It's rare to get two constants the same in a classroom of loggers, so you can use student initials + vref as unique identifiers for each logger. If you do get a couple the same you can change the last two digits to make unique serial numbers without affecting the readings. The battery readings have an internal resolution limit of 16 millivolts, so ±20mv is as close as you can get on screen.
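The vref correction is simple proportional arithmetic; an illustrative Python version (names are mine):

```python
def corrected_vref(current_vref, measured_mv, reported_mv):
    """Per the text, +/-400 counts on the vref constant shifts the
    reported battery reading by about 1 mV, so scale the observed
    error (meter minus screen) by 400 and apply it to the constant."""
    return current_vref + (measured_mv - reported_mv) * 400

# UART supplies 5000 mV on the multimeter but the logger reports 4976 mV,
# i.e. the logger is under-reading by 24 mV, so raise the constant:
new_vref = corrected_vref(1126400, 5000, 4976)
```

Because the procedure converges iteratively (save, restart, re-check), one or two passes are normally enough to bring the on-screen reading within the 16 mV internal resolution limit.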
After setting the time, the sampling interval, and other operating parameters, choosing [ ] START logging will require the user to enter an additional ‘start’ command. Only when that second ‘start’ confirmation is received does old data get erased by pre-loading every memory location in the EEprom with zero. A zero-trap is required on the first byte of each record because those preloaded zeros also serve as the End-Of-File markers later during download. (Note: if you leave the default LogLowestBattery enabled, that zero-trap is already provided for you.) LEDs then ‘flicker’ rapidly to indicate a synchronization delay while the logger waits to reach the first aligned sampling time so the code can progress from Setup() into the Main Loop().
In the main LOOP()
If all you do is enable sensors via defines at the start of the program you won't have to deal with the code that stores the data. However, to add a new sensor you will need to make changes to the I2C transaction that transfers those sensor readings into the EEprom (and to the sendData2Serial function that reads them back later). This involves dividing your sensor variables into 8-bit pieces and adding those bytes to the wire transfer buffer. This can be done with bit-math operations for long integers, or via the lowByte & highByte macros for 16-bit integers. The general pattern when sending bytes to an I2C EEprom is:
Wire.beginTransmission(EEpromAddressonI2Cbus); // first byte in I2C buffer
Wire.write(highByte(memoryAddress));           // it takes two bytes to specify the
Wire.write(lowByte(memoryAddress));            // memory location inside the EEprom
loByte = lowByte(SensorReadingIntegerVariable);
Wire.write(loByte); // adds 1st byte of sensor data to wire buffer
hiByte = highByte(SensorReadingIntegerVariable);
Wire.write(hiByte); // adds 2nd byte of sensor data to the buffer
— add more Wire.write statements here as needed for your sensors —
The saved bytes must total 1, 2, 4, 8 or 16 in each I2C transaction. Power-of-two byte increments are required because the # of bytes saved per sampling event must divide evenly into the physical page limit inside each EEprom, which is also a power of two in size. The code will display a warning on screen if bytesPerRecord is not a power of two.
Wire.endTransmission(); // Only when this command executes do the bytes accumulated in the wire buffer actually get sent to the EEprom.
The key insight here is that the wire library is only loading the bytes into a memory buffer until it reaches the Wire.endTransmission() command. So it does not matter how much time you spend adding (sensor variable) bytes to the transaction so long as you don’t start another I2C transaction while this one is in progress. Once that buffered data has been physically sent over the wires, the EEprom enters a self-timed writing sequence and the logger reads the rail voltage immediately after the write process begins. The only way to accurately gauge the state of a lithium battery is to check it while it is under this load.
NOTE: The data download function called in setup retrieves those separate bytes from the EEprom and concatenates them back into the original integer sensor readings for output on the serial monitor. So the sequence of operations in the sendData2Serial retrieval function must exactly match the order used in the main loop to load sensor bytes into the EEprom.
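The split-and-reassemble round trip is easy to prototype on a desktop before touching the logger code. This Python sketch mirrors the lowByte/highByte pattern (function names are mine, not from the codebase):

```python
def low_byte(v):  return v & 0xFF          # Arduino lowByte() equivalent
def high_byte(v): return (v >> 8) & 0xFF   # Arduino highByte() equivalent

def store_int16(value):
    """Split a 16-bit sensor reading into the two bytes written to the EEprom."""
    return [low_byte(value), high_byte(value)]

def retrieve_int16(byte_pair):
    """sendData2Serial-style reassembly: this MUST use the same byte order
    as the store step, or every reading comes back scrambled."""
    lo, hi = byte_pair
    return (hi << 8) | lo

reading = 10130   # e.g. some 16-bit sensor output
assert retrieve_int16(store_int16(reading)) == reading

# A long (32-bit) value needs four writes; this is the "bit-math" route:
def store_int32(value):
    return [(value >> shift) & 0xFF for shift in (0, 8, 16, 24)]
```

If a mismatch in store/retrieve order is suspected, a round-trip check like the assert above catches it immediately, which is much faster than debugging garbled downloads.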
Adding Sensors to the Logger:
By default, the logger records the RTC temperature (#define logRTC_Temperature) at 0.25°C resolution and the battery voltage under load (#define logLowestBattery). These readings are compressed to only one byte each by scaling after subtracting a fixed ‘offset’ value. This allows about 2000 readings to be stored on the 4k (4096 byte) EEprom, enough for 20 days of operation at a 15-minute sampling interval.
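The one-byte compression scheme can be sketched in Python; the offset value here is illustrative, not the one used in the logger code:

```python
def compress_rtc_temp(temp_c, offset_c=-10.0):
    """Hypothetical one-byte encoding: 0.25 C steps above a fixed offset.
    Subtract the offset, scale by the resolution, clamp into a byte."""
    index = round((temp_c - offset_c) / 0.25)
    return max(0, min(255, index))

def expand_rtc_temp(byte_val, offset_c=-10.0):
    """Reverse the compression during download."""
    return byte_val * 0.25 + offset_c

assert expand_rtc_temp(compress_rtc_temp(21.5)) == 21.5

# Two 1-byte defaults -> 2 bytes per record on the 4096-byte EEprom:
records = 4096 // 2          # about 2000 readings, as stated in the text
days = records / 96          # 96 samples per day at a 15-minute interval
```

One byte at 0.25°C steps covers a 64°C span, which is why a sensible offset matters: it positions that span over the temperatures you actually expect to log.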
A typical RTC temperature record from a logger installed into a cave early in the project. The datasheet spec is ±3° accuracy, but most are within ±0.5° near 25°C. If you do a simple y=Mx+B calibration against a trusted reference sensor, the RTC temperatures are very stable over time. The RTC updates its temperature register every 64 seconds, so there is no benefit from reading it more frequently than once per minute.
That 4k fills more quickly if your sensors generate multiple 2-byte integers, but larger 32k (AT24c256) EEproms can easily be added for longer running time. These can be found on eBay for ~$1 each and they work with the same code after you adjust the define statements for EEpromI2Caddr & EEbytesOfStorage at the start of the program.
This BMP280 pressure sensor matches the connection pattern on this 32k EEprom module. So the two boards can be soldered onto the same set of double-length header pins.
Vertical stacking allows several I2C modules to fit inside the 50mL body tube. Any I2C sensor breakouts could be combined this way provided they have different bus addresses.
The pullup resistors on the sensor modules can usually be left in place, as the logger will operate fine with a combined parallel resistance as low as 2.2k ohms. No matter what sensor you enable, always check that the total of all bytes stored per pass through the main loop is 1, 2, 4, 8 or 16, or you will get a repeating data error when the bytes transmitted over the I2C bus cross a physical page boundary inside the EEprom. This leads to a wrap-around which over-writes data at the beginning of the memory block. Also note that with larger EEproms you may need to slow the serial communications down to only 250k BAUD to prevent the occasional character glitch that you sometimes see with long downloads at 500k.
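A quick sanity check of the record-size rule could look like this Python sketch (the 32-byte page size is an assumption typical of small 4k EEproms; larger parts use bigger pages, which are still powers of two):

```python
EEPROM_PAGE_SIZE = 32   # assumed physical page size of a 4k EEprom

def bytes_per_record_ok(n):
    """Records must be 1, 2, 4, 8 or 16 bytes so that an integer number
    of records always fits in each physical page; anything else eventually
    straddles a page boundary and wraps, corrupting earlier data."""
    return n in (1, 2, 4, 8, 16) and EEPROM_PAGE_SIZE % n == 0

assert bytes_per_record_ok(8)
assert not bytes_per_record_ok(6)   # 6-byte records eventually cross a page edge
```

This is the same check the logger code performs when it warns about a bad bytesPerRecord total, just expressed outside the Arduino environment.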
Perhaps the most important thing to keep in mind is that breadboards connect to the module header pins via tiny springs which are easily jiggled loose if you bump the logger. Small beads of hot glue can be used to lock sensor modules & wires into place on the breadboard area. Another drop can also help secure the Cr2032 battery in place for outdoor deployments. Some sensors can handle a momentary disconnection, but most I2C sensors require full re-initialization or they will not deliver any more data after a hard knock jiggles the battery contact spring. So handle the logger gently while it's running – no tossing them in a backpack full of books! Many students make additional no-breadboard loggers with fully soldered connections to the sensor modules if their final projects require rough handling. It's also a good idea to put 1 gram (or 2 half-gram) silica gel desiccant packs with color indicator beads inside the body tube for outdoor deployments. A change in the indicator bead color is the only way to know if moisture is somehow seeping in, potentially causing runtime problems or early shutdown.
The base code also includes the DIGITAL method we developed to read the NTC/LDR sensors. On this new build we used the internal pullup resistor on D8 as a reference to free up another digital pin. The blue jumper wire on D3 (the 2nd external interrupt) can wake the logger with high / low signals. This enables event timing and animal tracking. Pet behavior is a popular theme for final student projects.
The TTP233 can detect a press through 1-2mm of flat plastic but it does not read well through the curved surface of the tube. In open air it triggers when your finger is still 1cm away but the sensitivity can be reduced by adding a trimming capacitor.
The AM312 draws <15µA and has a detection range of ~5m through the centrifuge tube. This sensor has a relatively long 2-4 second reset time and will stay high continuously if it gets re-triggered in that time. Our codebase supports counting PIR detections OR using the PIR to wake the logger for other sensor readings instead of the standard RTC alarm.
These 0.49″ micro OLEDs sleep at 6µA and usually draw less than a milliamp displaying text at 50% contrast. However, like all OLEDs they send wicked charge-pump spikes onto the supply rails. A 220 or 440µF tantalum right next to them on the breadboard will suppress that noise. Sleep the ProMini while the pixels are turned on to lower the total current load on the battery.
These displays run about two weeks on a coin cell if you only turn them on briefly at 15 minute intervals, depending on contrast, pixel coverage, and display time. It might also be possible to completely depower them when not in use with a mosfet like the TN0702N3.
These OLEDs are driven by a 1306, so you can use standard libraries like Greiman's SSD1306Ascii which can be installed via the library manager. However, the mini screens only display a weirdly located sub-sample of the controller's 1k memory – so you have to offset the X/Y origin points on your print statements accordingly.
While I2C sensors are fun, we should also mention the classics. It is often more memorable for students to see or hear a sensor's output, and the serial plotter is especially useful for lessons about how noisy their laptop power supply is…
If you twist the legs 90°, a standard potentiometer fits perfectly into the 25 tie-point breadboard for ADC control of PWM rainbows on the RGB LED.
Light-theremin tones map onto the squawky little Piezo speaker and alligator clips make it easy to try a variety of metal objects in the Capacitive Sensing lab.
If you run a lab tethered to the UART for power, then your only limitation is the 30-50 milliamps that those chips normally provide. This is usually enough for environmental sensors, although some GPS modules will exceed that capacity. If a high-drain GPS module or an infrared CO2 sensor is required, then you will need one of the previous AA-powered loggers from the project.
When running the logger in stand-alone mode your sensors have to operate within the current limitations of the CR2032 coin cell. This means sensors should take readings below 2mA and support low-power sleep modes below 20µA (ideally < 2µA). Order 3.3v sensor modules without any regulators – otherwise the 662k LDO on most eBay sensor modules will increase logger sleep current by ~8µA due to back-feed leakage through the regulator. Sensors without regulators usually have -3.3v specified in the name, so a GY-BME280-3.3v humidity sensor has no regulator, but most other BME280 modules will have regulators.
The best sensor libraries support three things: 1) one-shot readings that put the sensor into a low power sleep mode immediately after the reading, 2) the ability to sleep the ProMini processor WHILE the sensor is generating those readings, and 3) integer mathematics for speed and a lower memory footprint. Many sensors can read at different resolutions using a technique called oversampling, but creating high resolution (or low noise) readings with this method takes exponentially more power. So you want your library to let you set the sensor registers to capture only the resolution you need for your application. The library should also have some way to set the I2C address to match your particular sensor module, as most sensors support different addresses depending on which pin is pulled up. Always have a generic I2C bus scanning utility handy to check that the sensor is showing up on the bus at the expected address after you plug it into the breadboard (and restart the logger).
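That "exponentially more power" claim can be made concrete with the standard oversampling rule of thumb (an assumption here, not a figure from any particular sensor's datasheet) that each extra bit of effective resolution requires roughly 4x as many samples:

```python
def oversampling_cost(extra_bits):
    """Classic oversampling-and-decimation rule: an oversampling ratio of
    4**n yields n extra bits, so conversion energy grows as 4**extra_bits."""
    return 4 ** extra_bits

# Pushing a nominal 12-bit reading to 16 effective bits costs roughly
# 256x the conversion energy of a single 12-bit sample:
factor = oversampling_cost(4)
```

This is why configuring the sensor registers for only the resolution you actually need pays off so heavily on a coin cell.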
Logger Operation:
The logger usually draws peak currents near 3.3mA, although this can be increased by sensors and OLEDs. The logger typically sleeps between 5 – 10µA with a sensor module attached. Four 5mA*30millisecond (CPU time) sensor readings per hour gives a maximum battery lifespan of about one year. So the logger is usually more limited by memory than by the 100mAh available from a Cr2032. The tantalum rail-buffering capacitor only extends operating life about 20% under normal conditions, but it becomes more important with poor quality coin cells or in colder environments where the battery chemistry slows down:
A BMP280 sampling event with NO rail buffering capacitor pulls a NEW coin cell down about 50mv during the logging event…
…while the voltage on an OLD coin cell falls by almost 200 millivolts during that same event on the same logger – (again with NO rail buffer cap)
Adding a 1000µF [108j] tantalum rail buffer to that same OLD battery supports the coin cell, so the logging event now drops the voltage less than 20mV.
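The one-year lifespan estimate mentioned earlier (four 5mA*30ms readings per hour on a ~5µA sleeper) can be checked with simple duty-cycle arithmetic. A Python sketch of that back-of-envelope math, assuming the nameplate 100mAh capacity with no derating:

```python
def average_current_uA(sleep_uA, active_mA, active_ms, events_per_hour):
    """Duty-cycle average: the sleep floor plus the hourly active charge
    (in microamp-seconds) spread over the 3600 seconds of the hour."""
    active_uAs_per_hour = active_mA * 1000 * (active_ms / 1000) * events_per_hour
    return sleep_uA + active_uAs_per_hour / 3600

# Four 5 mA x 30 ms readings per hour on a 5 uA sleeper:
avg = average_current_uA(5, 5, 30, 4)          # ~5.17 uA average draw
years = 100_000 / avg / (24 * 365)             # naive 100 mAh / avg estimate
```

The naive result comes out above two years; real CR2032s deliver well under their nameplate capacity under pulse loads, in the cold, and with self-discharge, which is why the practical figure in the text is closer to one year and why memory usually fills before the cell dies.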
The code sleeps permanently when the battery reading falls below the value defined for systemShutdownVoltage, which we usually set at 2850mv because many 328p chips trigger their internal brown-out detector at 2.77v. And the $1 I2C EEprom modules you get from eBay often have an operational limit at 2.7v. If you see noisy voltage curves there's a good chance poor battery contact is adding resistance: secure the coin cell with a drop of hot glue before deployment.
High & Low battery readings (in mv) from an e360 logging Pressure & Temp from a BMP280 @30min intervals to a 32k I2C EEprom (full in 84 days). This unit slept at 3.2µA and had a 1000uF tantalum rail capacitor. This kept battery droop below 50mv during the data save, stabilizing above 3000mv.
Same configuration & new Cr2032 at start, only this logger had NO rail capacitor and slept at only 2µA. Despite burning less power during sleep, the coin cell voltage droop during data saves was ~100mv, so the battery plateau is closer to the low battery shutdown at 2800mv.
When testing sleep current on a typical batch of student builds, some will seem to have anomalously high sleep currents in the 600-700µA range. Often that's due to the RTC alarm being on (active low), which causes a constant drain through the 4k7 pullup resistor on SQW until the alarm gets turned off by the processor. Tantalum capacitors are somewhat heat sensitive, so beginners can damage them while soldering, and in those cases they may turn into a short. A typical student logger should draw between 3-10µA when it is sleeping between sensor readings, and if they are consistently 10x that, replacing an overheated rail capacitor may bring that down to the expected sleep current. Also check that the ADC is disabled during sleep, as that will draw ~200µA if it is somehow left on. Occasionally you run into a ProMini clone with fake Atmel 328P chips that won't go below ~100µA sleep no matter what you do. But even with a high 100µA sleep current, a new Cr2032 should run the logger for about a month measuring something like temperature, and 10 days with a high power demand sensor like the BME280. This is usually enough run time for data acquisition within your course schedule even if the high drain issue does not get resolved.
Occasionally you get an RTC module with a weak contact spring making the logger quite vulnerable to bumps disconnecting power. A small piece of double sided foam tape can be used to make the battery contact more secure before any outdoor deployments (although this is somewhat annoying to remove afterward). Note that on this unit hot glue was used to affix the logger to the printed rail instead of zip ties.
A few student loggers will still end up with hidden solder bridges that require a full rebuild. This can be emotionally traumatic for students until they realise how much easier the process goes the second time round. Once you've made a few, a full logger can usually be assembled in less than 1.5 hours. It's even faster if you make them in batches so that you have multiple units for testing at the same time. Order a box of 200 cheap coin cell batteries before running a course, because if a student accidentally leaves a logger running Blink (as opposed to the logger code) it will drain a battery flat in a couple of hours. This happens frequently.
Measuring tiny sleep currents in the µA range cannot be done easily with cheap multimeters because they burden the whole circuit with extra resistance. So you need a special tool like the uCurrent GOLD multimeter adapter (~$75) or the Current Ranger from LowPowerLab (~$150 w screen & battery). The only commercial instrument for measuring tiny currents that's remotely affordable is the AltoNovus NanoRanger, but that's still more than twice the price of the µC or the Ranger. You can also do the job with an old oscilloscope if you have that skillset. Again, sleep current is more of a diagnostic tool, so you can usually run a course without measuring the sleep currents by simply running the loggers for a week between labs while recording the battery voltage. If that graph plateaus (after a week of running) near 3v and just stays there for a long time – your logger is probably sleeping at the expected 3-5µA.
Running the labs:
The basic two module combination in this logger (without any additional sensors) can log temperature from the RTC. This ±0.25°C resolution record enables many interesting temperature-related experiments, for example:
Log the temperature inside your refrigerator for 24 hours to establish a baseline
Defrost your freezer and/or clean the coils at the back of the machine
Log the temperature again for 24 hours after the change
Calculate the electric power saved by comparing the compressor run time (ie: while the temperature is falling) before & after the change
Note that the housing, the air inside it, and the thermal inertia of the module stack result in ~5 to 10 minutes of lag behind temperature changes outside the logger.
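One way students might estimate compressor run time from the temperature record is to count the intervals where the temperature is falling. This is a hypothetical sketch with illustrative toy numbers, not real fridge data:

```python
def compressor_duty_cycle(temps_c):
    """Crude proxy: the fraction of sampling intervals where the
    temperature is dropping, i.e. the compressor is presumed running."""
    falling = sum(1 for a, b in zip(temps_c, temps_c[1:]) if b < a)
    return falling / (len(temps_c) - 1)

# Toy before/after series (hypothetical values):
before = [5.0, 4.5, 4.0, 4.5, 4.0, 3.5, 4.0]
after  = [5.0, 4.5, 5.0, 5.5, 5.0, 5.5, 5.0]

# A lower duty cycle after defrosting/coil cleaning implies energy saved:
saving = compressor_duty_cycle(before) - compressor_duty_cycle(after)
```

Multiplying the duty-cycle difference by the compressor's rated wattage and the 24-hour window turns this into an energy estimate, which is exactly the comparison the lab asks for.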
The small form factor of the e360 enables other benchtop exercises like this Mason jar experiment. The loggers are sealed in a cold jar taken directly from the freezer with a BMP280 sampling pressure & temp. every 15 seconds. The jars are then placed in front of a fan which brings them to room temp in ~45 minutes. For a comparison dataset, this experiment can also be done in reverse – sealing the loggers in room temperature jars which are then cooled down in a freezer; although that process takes longer.
Through no fault of their own, students usually have no idea what messy real-world data looks like, and many have not used spreadsheets before. So you will need to provide both good and bad example templates for everything, but that’s easy enough if you ran the experiment a dozen times yourself at the debugging stage.
Even then students will find creative ways to generate strange results: by using a cactus for the evapotranspiration experiment, or attempting the light sensor calibration in a room that never rises above 100 Lux. Deployment protocols (Sensor Placement, etc.) are an important part of any environmental monitoring course, and ‘unusable data’ (even though the logger was working) is the most common project failure. It is critical that students frequently download and send graphs of the data they've captured for feedback before their project is due. Without that deliverable, they will wait until the hour before a major assignment is due before discovering that their first (and sometimes only) data capturing run didn't work. This data visualization is required for ‘pre-processing’ steps like the synchronization of different time series and for the identification of measurements from periods where the device was somehow compromised. Your grading rubric has to be focused on effort and understanding rather than numerical results, because the learning goals can still be achieved if they realize where things went wrong.
The temperature readings have serious lag issues while the pressure readings do not. A good lesson in thinking critically about the physical aspects of a system before trusting a sensor. With the built-in 4096 byte EEprom, saving all three 2-byte BMP280 outputs (temp, pressure & altitude) plus two more bytes for RTCtemp & battery gives you room for 512 of those 8-byte records. If you sample every fifteen seconds, the logger will run for about two hours before that 4k memory is full. The test shown above was done with a ‘naked’ logger, but when the loggers are used inside the centrifuge body tube the enclosed temperature sensors have about 15 minutes of thermal lag behind changes in air temperature outside the tube.
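The record-capacity arithmetic generalizes to any sensor mix; a small Python check (names mine):

```python
def run_time_hours(eeprom_bytes, bytes_per_record, interval_seconds):
    """How long a run lasts before the EEprom fills at a fixed interval."""
    records = eeprom_bytes // bytes_per_record
    return records * interval_seconds / 3600

# 8-byte records (three 2-byte BMP280 outputs + RTCtemp + battery bytes)
# on the built-in 4096-byte EEprom at a 15-second interval:
hours = run_time_hours(4096, 8, 15)
```

Swapping in 32768 bytes for an AT24c256 module, or a 30-minute interval, shows immediately how memory, record size, and interval trade off against each other when planning a deployment.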
Try to get your students into the habit of doing a ‘fast burn’ check whenever the logger is about to be deployed: set the logger to a 1-second interval and then run it tethered to the UART for 20-30 seconds (with serial on). Then restart the serial monitor window to download those few records and look at the data. This little test catches 90% of the code errors before deployment.
Important things to know:
Time: You need to start ordering parts at least three months ahead of time. If a part costs $1 or less, then order 5x as many of them as you think you need. Technical labs take a week to write, and another week for debugging. You can expect to spend at least an hour testing components before each lab. The actual amount of prep also depends on the capabilities of your student cohort, and years of remote classes during COVID lowered that bar a lot. Have several spare ‘known good’ loggers (that you built yourself) on hand to loan out so hardware issues don't prevent students from progressing through the lab sequence while they trouble-shoot their own builds. Using multi-colored breadboards on those loaners makes them easy to identify later. Measuring logger sleep current with a DSO 138 scope or a Current Ranger will spot most hardware related problems early, but students don't really get enough runtime in a single course to escape the bathtub curve of new part failures.
Yes, some of that student soldering will be pretty grim. But my first kick at the can all those years ago wasn’t much better and they improve rapidly with practice. As long as the intended electrical contact is made without sideways bridges, the logger will still operate.
Money: Navigating your school's purchasing system is probably an exercise in skill, luck and patience at the best of times. Think you can push through dozens of orders for cheap electronic modules from eBay or Amazon? Fuhgeddaboudit! We have covered more than half of the material costs out of pocket since the beginning of this adventure, and you'll hear that same story from STEM instructors everywhere. If you can convince your school to get a class set of soldering irons, Panavise Jr. 201s, multimeters, and perhaps a 3D printer with some workshop supplies, then you are doing great. Just be ready for the fact that all 3D printers require maintenance, and the reason we still use crappy Ender3V2's is that there's no part on them that can't be replaced for less than $20. We bought nice multimeters at the beginning of this adventure but they all got broken, or grew legs, long before we got enough course runs with them. We now use cheap DT830's and design the labs around their burden-voltage limitations. Small tools like 30-20AWG wire-strippers and side-snips should be considered consumables, as few of them survive to see a second class. Cheap soldering irons can now be found for ~$5 (which is less than tip replacement on a Hakko!) and no matter which irons you get the students will run the tips dry frequently. The up side of designing a course around the minimum functional tools is that you can just give an entire set to any students who want to continue on their own after the course. That pays dividends later that are worth far more than one year's budget.
An inexpensive BMP280 can be used as a temperature reference for the thermistor calibration lab. At a 1 minute interval the logger will run for 16 hours before the 4K EEprom is full. The logger should remain in each of the three water baths for 8 hours to stabilize. Stainless steel washers keep the logger submerged.
With insulated lunch box containers, the 0°C bath develops a nice rind of ice after being left in the freezer over night. No reference sensor is needed for the ice point because that is a physical constant.
All that probably sounds a bit grim, but the last thing we want is for instructors to bite off more than they can chew. Every stage in a new course project will take 2-3x longer than you initially think! So it’s a good idea to noodle with these loggers for a few months before you are ready to integrate them into your courses. Not because any of it is particularly difficult, but because it will take some time before you realize the many different ways this versatile tool can be used. Never try to teach a technical lab that you haven’t successfully done yourself a few times.
A good general approach to testing any DIY build is to check them on a doubling schedule: start with tethered tests reporting via the serial monitor, then initial stand-alone tests at 1, 2, 4 & 8 hours till you reach a successful overnight run. Follow this by downloads after 1, 2, 4 & 8 days. On the research side of the project, we do fast (seconds) sample-interval runs to full memory shutdown several times, over a couple of weeks of testing, before loggers are considered ready to deploy in the real world. Even then we deploy 2-3 loggers for each type of measurement to provide further guarantee of capturing the data. In addition to data integrity, smooth battery burn-down curves during these tests are an excellent predictor of logger reliability, but to use that information you need to be running several identical machines at the same time and start them all with the same fresh batteries so you can compare the graphs to each other. A summer climate station project with five to ten units running in your home or back yard is a great way to start and, if you do invest that time, it really is worth it.
Common coding errors like mishandled variables usually generate repeating patterns of errors in the data. Random processor freezing is usually hardware/timing related and the best way to spot the problematic code is to run with ‘logger progressed to Point A/B/C/etc’ comments printed to the serial monitor. In stand-alone mode you can turn on different indicator LED color patterns for different sections. Then when the processor locks up you can just look at the LEDs and know approximately where the problem occurred in your code.
Last Word:
The build lab at the beginning of the course – with everybody still smiling because they still have no idea what they are in for. The course is offered to earth & environmental science students. Reviewing Arduino-based publications shows that hydrologists & biologists are by far the largest users of Open Science Hardware in actual research – not engineers! Traditional data sources rarely provide sufficient spatiotemporal resolution to characterize the relationships between environments and the response of organisms.
Why did we spend ten years developing a DIY logger when the market is already heaving with IOT sensors transmitting to AI back-end servers? Because the only ‘learning system’ that matters to a field researcher is the one between your ears. Educational products using pre-written software are usually polished plug-and-play devices, but the last thing you want in higher education is something that black-boxes data acquisition to the point that learners are merely users. While companies boast that students can take readings without hassle and pay attention only to the essential concepts of an experiment, that has never been how things work in the real world. Troubleshooting by process-of-elimination, combined with modest repair skills, often makes the difference between a fieldwork disaster and resounding success. So sanitized equipment that generates uncritically trusted numbers isn't compatible with problem-based learning. Another contrast is the sense of ownership & accomplishment that becomes clear when you realize how many students gave their loggers names and displayed them proudly in their dorm after the course. That's not something you can buy off a shelf.
And: We made a Classroom Starter Kit part list back in 2016, when we were still teaching with UNOs, and a post of Ideas for your Arduino STEM Curriculum. Those posts are now terribly out of date, but probably still worth a read to give you a sense of things you might want to think about when getting a new course off the ground. Those old lists also predate our adoption of 3D printing, so I will try to post updated versions soon. The bottom line is that if you are going to own a 3D printer you should expect to completely tear down the print head and rebuild it once or twice per year. While they are the bottom of the market in terms of noise & speed, every possible repair for old Enders is cheap and relatively easy with all the how-to videos on YouTube. Those videos are your manual, because Creality broke the cost barrier to mass-market adoption by shipping products that were only 90% finished, with bad documentation, and then simply waiting for the open source hardware community to come up with the needed bug-fixes and performance improvements. X/Y models are more robust than bed-slingers, but our Ender 3v2’s have been reliable workhorses provided you stick to PLA filament. If you want a fast turn-key solution then your best option is one of the Bambu Lab printers like the A1, because they are essentially plug & play appliances. But if you enjoy tinkering as a way of learning, that’s Ender’s bread & butter. A bed-level touch sensor is – by far – the best way to get that critical ‘first layer’ of your print to stick to the build plate, and the E3V2 upgraded with a CRtouch & flashed to MriscoC is an OK beginner’s rig. Our local computer stores always seem to have brand new 3v2’s for about $100 as a new-customer deal, but these days I buy cheap ‘for parts’ Ender 5 S1’s from eBay for about that price and fix them up, because 120mm/s is about the minimum speed I have the patience for.
Now that multi-color machines are popular, what used to be high-end single-color machines are being discounted heavily, so even brand new E5S1’s are selling for about $200 in 2025 if you look around. Having two printers means a jam doesn’t bring production to a halt just before fieldwork, and by the time you get three printers the speed issues disappear and you can dedicate machines to different filaments. Keep in mind that no matter how fast your printer is capable of moving the print head, materials like TPU and PETG will still force you to print at much slower rates if you want clean prints with tight tolerances. To reduce noise, I usually put the printer onto a grey 16x16 inch cement paver with some foam or squash-ball feet under it.
Printer jams like this are inevitable. But if you don’t manage to fix it, a pre-assembled replacement hot end for an Ender3v2 is only $10. I find that it’s better to have multiple slower printers than one high-end machine because then a single jam or failure doesn’t take out all of your production capacity just before a fieldwork trip. Functional prints rarely have to be pretty – so you can speed up production with lower quality settings.
Avoid shiny or multi-color silk filaments for functional prints as they are generally more brittle and crack easily. Prints also get more brittle as they absorb humidity from the air. If that happens, cyanoacrylate glue + baking soda can be used for quick field repairs. It’s worth the extra buck or two to get filaments labeled as PLA pro, as they usually have better durability in trade for a slightly higher printing temperature (as long as the company was not lying about their formulation). I use a food dehydrator I bought for $10 from a local charity shop to dry out my PLA or PETG filaments if they have been open for more than a month. Really hygroscopic filaments (like PVA, TPU or Nylon) have to be dried overnight before every single print. Most machines work fine with the defaults, but you can get great prints out of any printer provided the bed is leveled, the e-steps & flow are calibrated, and the slicer settings are tuned for the filament you are using. If you are using Cura there is a plugin called AutoTowers Generator that makes special test prints. You print a set of those (for temperature, flow rate and then retraction) and set your slicer settings to match the place on the tower where the print looks best. You may have to do this for each brand of filament as they can have quite different properties. I rarely use filaments that cost more than $15/kg because functional prints don’t have to be that pretty, and you will be tweaking each design at least 10 times before you iterate that new idea to perfection. I stock up on basic black/white/grey on Amazon sale days for only $10/roll. Once you get the hang of it, most designs can use 45° angles and sequential bridging to print without any supports. Conformal coating or a layer of clear nail polish prevents marker ink from bleeding into the print layers when you number your deployment stations.
Our Lux-to-PAR calibration post is a good example of the physical problems that have to be solved before you can collect good data: A printed shield was necessary to keep the temp sensors cool under direct sun, a ball joint was required for leveling, and a ground spike was needed for the mounting. I have started bumping up against the limits of what you can do in Tinkercad with some of our more complex designs, but OnShape is there waiting in the wings when I’m ready.
As a researcher, becoming functional with Tinkercad and 3D printing is a better investment of your time than learning how to design custom PCBs because it solves a much larger number of physical issues related to unique experiments and deployment conditions. That said, it’s not usually worth your time to design things from scratch if you can buy them off the shelf. So people get into 3D printing because what they need does not exist or is outrageously expensive for what it does. All FDM prints are porous to some extent because air bubbles get trapped between the lines when they are laid down – so they are not truly waterproof like injection molded parts unless you add extra treatment steps. UV-resistant clear coating sprays are a good idea for outdoor deployment. I don’t have much use for multi-color printing, but when they advance that to supporting multiple materials I will be interested, because being able to print a TPU gasket inside a PETG housing would be quite useful.
If you are just getting started and looking for something to learn the ropes, you can find useful add-ons for your printer on Thingiverse, or you could try organizing your workspace with the Gridfinity system from Zack Freedman. You will also find lots of handy printable tools out there for your lab bench by searching with meta-engines like Yeggi or STLfinder. A final word of warning: this hobby always starts with one printer… then two… then… don’t tell your partner how much it cost.
Cr2032 Internal Resistance vs mAh [Fig6 from SWRA349] Our peak load of ~8 mA while writing data to the EEprom creates a voltage drop across the battery IR. The load-induced transient on the 3v Cr2032 can’t fall below 2.775v or the BOD halts the 328p processor. This limits our useable capacity to the region where battery IR is less than 30 ohms. This also makes it critical to control when different parts of the system are active to keep the peak currents low. With this relatively high cutoff we can only use about 100 milliamp-hours of capacity from a typical CR2032.
Reviewers frequently ask us for estimates based on datasheet specifications, but this project is constantly walking the line between technical precision and practical utility. The dodgy parts we’re using are likely out of spec from the start, but that’s also what makes our 2-module data loggers cheap enough to deploy where you wouldn’t risk pro-level kit. And even when you do need to dot those i’s and cross those t’s, you’ll discover that OEM test conditions are often prescribed to the point of being functionally irrelevant in real-world applications. The simple question: “How much operating lifespan can you expect from a coin cell?” is difficult to answer because the capacity of lithium manganese dioxide button cells is nominal at best and wholly dependent on the characteristics of the load. CR2032’s only deliver 220mAh when the load is small: Maxell’s datasheet shows that a 300 ohm load, for a fraction of a second every 5 seconds, will drop the capacity by 25%. However, if the load falls below 3μA for long periods, then this also causes the battery to develop higher than normal internal resistance, reducing the capacity by more than 70%. The self-discharge rate increases with temperature due to electrolyte evaporating through the edge seal. Another challenge is ambient humidity, which can change PCB leakage currents significantly.
Voltage Under EEprom load VS date [runtime hours in legend] with red LED on D13 driven HIGH for 1.4mA sleep current, 30 second interval, 8-byte buffer. These are serial tests performed on the same logger. 1.4mA continuous is probably not relevant to our duty cycle.
Surprisingly little is known about how a CR2032 discharges in applications where low μA-level sleep currents are combined with frequent pulse loads in the mA range; yet that’s exactly how a datalogger operates. As a general rule, testing and calibration should replicate your deployment conditions as closely as possible. However, normal run tests take so long to complete that you’ve advanced the code enough in the interim that the data is often stale by the time the test is done. Another practical consideration is that down at 1-2μA: flux, fingerprints, and even ambient humidity skew the results in ways that aren’t reproducible from one run to the next. So a second question is “How much can you accelerate your test and still have valid results?” Datasheets from Energizer, Duracell, Panasonic and Maxell reveal a common testing protocol using a 10-15kΩ load. So continuous discharges below 190μA shouldn’t drive you too far from the rated capacity. Unfortunately, that’s well below the discharge rates used by affordable battery testers, or in the tests you see on YouTube, so we are forced yet again to do our own empirical testing.
The easiest way to increase our base load is to leave the indicators on: all three LEDs will add ~80μA to the sleep current when lit using the internal pullup resistors. 80μA is ~16x our normal 5μA sleep current (including sensor draw & RTC temp conversions). A typical sampling interval for our work is 15min, so changing that to 1 minute gives us a similarly increased number of EEprom save events. With both changes, we tested several brands to our 2775mv shut-down:
Cr2032 Voltage Under EEprom Load VS Date: Accelerated Cr2032 run tests with 3xLED lit with INPUT_PULLUP for ~80μA sleep current although each unit was slightly different as noted, 1min sampling interval. Blue Line = Average excluding Hua Dao. CLKPR reduced system clock to 1MHz during eeprom save on this test to reduce peak currents to about 6mA. All units had 227E 25V 220μF rail buffering caps.
Note: Given the slight variation in each logger’s measured sleep current, the times listed here have been adjusted to a nominal 80μA. Also note that the price/cell is highly dependent on vendor & quantity.
Despite part variations these batteries were far more consistent on that 20ohm plateau than I was expecting. This ~15x test gives us a projected runtime of more than two years! That’s twice the estimate generated by the Oregon Embedded calculator when we started building these loggers. We did get a 30% delta between the name brands, but these tests were not thermally controlled and we don’t know how old the batteries were before the test. The rise in voltage after that initial dip is probably the pulse loads removing the passivation layer that accumulates during storage. The curves are a bit chunky because the 328P’s internal vref trick has a resolution of only 11mv, and we index-compress that to one byte which results in only 16mv/bit in the logs.
One notable exception is the Hua Dao cells, which I tested because, at only 14¢ each, they are by far the cheapest batteries on Amazon. My guess is they are so cheap because they used a lower grade of steel for the shell, which increases the cell resistance. We have many different runs going at any one time, and to make those inter-comparable you need to start each test with a fresh cell. Even if the current run test doesn’t need a battery’s full capacity, sometimes you just need to eliminate that variable while debugging. You also use a lot of one-shot batteries for rapid burn-in tests, so it makes sense for them to be as cheap as possible. Now that I know Hua Dao delivers half the lifespan of name-brand cells, I can leverage that fact to run some of those tests more quickly. I had planned on doing this with smaller batteries, but the Rayovac Cr2025 I tested ran for 1035 hours – longer than the Hua Dao Cr2032!
Cr2032’s used since January for bench testing.
Testing revealed another complicating factor when doing battery tests: With metal prices sky-rocketing, fake lithium batteries are becoming more of a problem. We’ve been using Sony Cr2032’s from the beginning of the project but the latest batch performed more like the Hua Dao batteries. This result was so unexpected that I dug through the bins for some old stock to find that the packaging looked slightly different:
Fake (left) vs Real (right)
On closer inspection it didn’t take long to spot the fraud:
Fake Sony Battery : laser engraved logo
Real Sony Battery: Embossed logo
More tests are under way so I’ll add those results to this post when they are complete. A couple of the 80μA units have been re-run after removing the 227E 25V 220μF rail buffering caps, confirming that the tantalum does not extend overall run time very much on good batteries, because their internal resistance rises slowly, but rail caps can more than double the lifespan with poor quality batteries like the Hua Dao. In my 20min tests at 3 volts, [477A] 10v 470μF caps eventually fell to a leakage of about 25nA, which was not much more than the 15nA on a [227E] 25v 220μF tantalum. 6v [108J] 1000μF rail caps have a much higher 980nA leakage, which is larger than the DS3231’s typical constant Ibat of 840nA. This shortens overall logger runtime to only 6-7 months: so very large rail caps are only useful with high-drain sensors or in cold conditions. It’s also worth mentioning that the spring contacts on those RTC modules are quite weak and may need a bit of heat shrink tubing behind them to strengthen the connection to the flat surface of the coin cell. A small bead of hot glue on top also helps protect the battery from bump-disconnects.
Northern caves hold near 5-10°C all year round, so the current set is running in my refrigerator. I will follow that with hotter runs because both coin cell capacity AND self-discharge are temperature dependent. We also plan to start embedding these loggers inside rain gauges, which will get baked under a tropical sun.
Addendum 2023-08-01
To avoid bump-disconnects in rough fieldwork conditions I sometimes add a bit of double-sided foam mounting tape behind the contact spring. This holds the battery well but is also a significant pain in the backside to remove at the end of the deployment. Usually that’s a good trade to guarantee the data.
This summer’s fieldwork required all of the units in my testing fleet, so I only have a handful of results from the refrigerator burn-down tests [@5°C]. The preliminary outcome is that, compared to the room temperature burns, the lithium cell plateau voltage drops by 80-100mv (typically from 3000mv to 2900mv). Provided the loggers were reading a low-drain sensor, the ‘cold’ lifespan was only about 20% shorter, because the normal 50-70mv battery droop (during sensor reading / eeprom save) only becomes important after the battery falls off its plateau. This is approximately the same lifespan reduction you see running at room temp without a rail buffering capacitor – as the buffer also only comes into play when the battery voltage is descending. This is also the reason why the larger 1000μF rail capacitors usually only provide about 10% longer life than the 220μF rail caps, as the reduced battery droop with the larger cap only matters when the cell is nearing end of life. The net result is that increasing to 1000μF rail buffers almost exactly offsets the lifespan losses at colder ambient temps around 5°C. But for longer deployments at normal room temps, the 1-2μA leakage of a 6v 1000μF [108J] tantalum removes any advantage over the 25v 220μF [227E], which has only 5nA leakage at 3 volts. Also note that some caps seem to need a few hours to ‘burn in’ before they are saturated enough to measure their leakage properly.
The freezer results are an entirely different situation, where we are only seeing a few days of accelerated 100μA sleep-current operation, because once you get below -15°C the coin cell plateaus below our 2800mv shutdown voltage. So the logger only operates for the brief span of time where the initial overvoltage on a new Cr2032 is still above its nominal voltage. Even a 1000μF cap will not fix that problem, so for truly cold weather deployments you’d need a different battery chemistry like lithium-thionyl chloride (LiSOCl2), which is good to -40°C provided you add a large rail buffering capacitor to compensate for its current limitations. Loggers drawing a more normal 2-5μA sleep current run OK in a domestic freezer if the sensor is not too demanding. In fact we use water’s 0°C phase transition as a physical reference when calibrating the onboard thermistors – and that’s done on a normal CR2032.
Addendum 2025-01-25
Finally getting the 2024 deployments swapped out, so I have some real-world burn down curves to add here. These units were in a U.S. cave rather than our typical Mexico field sites, so temps were cool enough to have some effect on the cell voltage:
11 months of [°C] temperature variation near the entrance of the cave, read by an NTC thermistor on the logger.
A new NightKonic Cr2032 coin cell was installed at the start of the deployment. This logger had a continuous sleep current of 1.2µA, but the RTC TCXO corrections bump that to >3µA average.
[°C] temperature variation is reduced as you move further into the cave (until eventually there is little of the outside annual climate signal left.)
Again a new NightKonic battery was used with this 1.6µA (continuous) sleeper. The lowest battery voltage under load was saved once per day for these records.
With normal runtime operations the CR2032s plateaued near the nominal 3v with a 220µF rail buffering cap. It’s hard to tell if that slight shift indicates the end of the battery’s 20-ohm plateau or was just a response to the lower winter temperatures. Assuming that was the end of the plateau, then I’d expect another three (?) months of operation before the internal resistance pushes the voltage droop during EEprom saves to our 2800mv cutoff. Both were deployed with 1 gram of desiccant, which probably was not enough for the 30mL polypropylene tubes (given the color of the indicator beads when the loggers were downloaded). Unsaturated silica gel holds the air near 20% RH, but these loggers probably spent half of their deployment at high RH. I rarely put desiccant in my testing loggers, and on the rare occasion when I forget one of them long enough for the Cr2032 to fall below 1000mv, there is “a particular smell” that is quite noticeable when the logger is opened. That makes me wonder if electrolyte off-gassing also affects the color of the desiccant indicator beads.
The take-home lesson is: when your accelerated testing indicates a two-year lifespan under optimal conditions, you should expect to reach only half of that on a real-world deployment. If I assume a 3.5µA sleep current (ie: the DS3231’s timekeeping average incl. TCXO conversions) with four 3.5mA x 50msec logging events per hour, then the Oregon Embedded calculator gives me a 100mAh battery life of 958 days. Interestingly, the capacity of a 256k EEprom @ 15min interval storing 4 bytes/record is also about 680 days. Taken together, these factors mean our loggers should only be expected to operate for a year, plus a healthy 3-6 month margin, if you used a good Cr2032 battery.
One unknown variable is the self-discharge rate of the Cr2032. I’ve seen references to a nominal 1% loss in battery capacity per year under ideal conditions, which would be like adding another 300nA of continuous load. But that loss can be up to 10x larger in high humidity environments.
Another variable that can have a serious impact on lifespan is the number of EEprom save events per day. In my tests a typical EEsave uses between 0.3-0.5 milliamp-seconds of power, with sensor readings using a similar amount per record (in comparison to a ‘good’ logger’s sleep-current baseload of about 250-300 milliamp-seconds per day). So a few hundred EEsave events could potentially use power comparable to the sleep current each day. EEproms have to erase and re-write an entire page of memory any time you save a number of bytes less than the hardware page size. If you increase the Two-Wire library buffers, and pre-buffer the sensor data into temporary arrays, you can increase the bytes per EEsave event from 16 to 64 (or even 128) to reduce the number of save events. When you transfer the same number of bytes as the EEprom’s hardware page size, the chip can skip any ‘pre-erase’ steps entirely – cutting ALL save events to 1/2 the time/power they would use when saving a smaller number of bytes. This both increases speed and can extend operating lifespan significantly with multi-sensor configurations collecting many bytes of data per record over short 1-5 minute sampling intervals.
Pressure testing has been on the to-do list for ages, but the rating on the PVC parts in our older housings meant we weren’t likely to have any issues. However, the new two-part mini-loggers fit inside a thin-walled falcon tube, which raised the question of how to test them. There are a few hyperbaric test chamber tutorials floating around the web, and we made use of one built from a scuba tank back at the start of the project, but I wanted something a bit less beefy, and easier to cobble together from hardware store parts. Fortunately Brian Davis, a fellow maker & educator, sent a photo of an old water filter housing he’d salvaged for use with projects that needed pressure tests. Residential water supply ranges from 45 to 80 psi, so it could replicate conditions down to 55m. That’s good for most of our deployments, and certainly farther than I was expecting those little centrifuge tubes to go.
This mini pressure chamber was made from a Geekpure 4.5″x10″ water filter housing, 2x male-male couplers, a garden tap, & a pressure gauge with a bicycle pump inlet. (~$70 for this combination) The relief valve & o-ring required silicone grease to maintain pressure.
I first tested 50mL ‘Nunc’ tubes from Thermo. These are spec’d to 14psi/1atm, but that’s a rating under tension from the inside. I put indicator desiccant into each tube so small/slow leaks would be easy to see, and used a small bicycle pump to increase the pressure by 5psi per day. These tubes started failing at 25psi, with 100% failure just over 30psi. Multiple small stress fractures occurred before the final longitudinal crack, which produced an audible ‘pop’ – often four or more hours after the last pressure increase. If 20psi is the max ‘safe’ pressure for these tubes, then the 50mL tubes can be deployed to about 10m with some safety margin for tides, etc. This result matches our experience with these tubes, as we often use them to grab water samples while diving.
[Click photos to enlarge]
As expected, the self-standing 30mL tubes proved significantly more resistant. All of them made it to 45psi and then progressed through various amounts of bending/cracking up to 100% failure at 55psi. Where the caps were reinforced (by JB weld potting a sensor module) the rim threads of the cap sometimes split before the tube itself collapsed:
Silicone grease was added to some of the caps although none of the dry ones leaked before the bodies cracked.
So the 30mL tubes have a deployment range to 25m with a good safety margin. The plastic of these tubes was somewhat more flexible, with some crushing almost flat without leaks. This implies we might be able to take these a little deeper with an internal reinforcement ring (?)
The next experiment was to try filling the tubes with mineral oil to see how much range extension that provides:
A third logger was submerged using only a sample bag:
The bag was included to test the ‘naked’ DS3231 & 328p chips. We’ve had IC sensors fail under pressure before (even when potted in epoxy), although it’s possible the encapsulation itself was converting the pressure into other torsional forces that wouldn’t have occurred if the pressure was equally distributed.
Again we moved in 5 psi increments up to 80 psi – which is the limit of what I can generate with my little bicycle pump. At 50psi some mineral oil seeped from the bag and at 70psi the ~1cm of air I’d left in the 50mL tube caused similar leakage. On future tests I will spend more time to get rid of all the bubbles before sealing the housings.
At 70psi the 50mL tube dented & sank and the lid started seeping oil (but did not crack)
The loggers continued blinking away for several days at 70, 75 & 80psi, but eventually curiosity got the best of me so I terminated the run. We were also getting uncomfortably close to the 90psi maximum test pressure on that polycarbonate filter housing. I was hoping to have some weird artifacts to spice up this post but no matter how hard I squint there really were no noticeable effects in the data at any of the pressure transitions – basically nothing interesting happened. I thought the resistive sensors would be affected but the RTC & NTC temperature logs have no divergence. The LDR looks exactly like a normal LDR record with no changes to the max/mins outside of normal variation. The battery curves are smooth and essentially indistinguishable from ‘dry’ bookshelf tests on the same cells. But I guess in this kind of experiment success is supposed to be boring… right? With mineral oil these little guys can go anywhere I can dive them to – even if the ‘housing’ is little more than a plastic bag.
One thing of note did happen after I removed the loggers from the chamber: I accidentally dropped the 30ml logger on the counter while retrieving it from the chamber and a thin white wisp of ‘something’ started swirling around the clear fluid inside the logger. This developed slowly and my first guess was that the capacitor had cracked and was leaking (?)
By the time I managed to capture this photo, the fine ‘smoke’ seen earlier had coalesced into a larger foam of decompression bubbles.
After emptying that oil, the logger itself went into a red D13 flashing BOD loop for a while, but by the time I’d cleaned it up enough to check the rail, the battery had returned to its nominal 3v. My theory is a similar off-gassing event was happening inside the battery – briefly causing a droop below the 2.7v BOD threshold. So it’s possible that while the loggers are not depth-limited per se using mineral oil, components like the separator in a battery may still be vulnerable to ‘rate-of-change’ damage. After more than two weeks at depth, I had vented the chamber in less than a minute. Of course when retrieving loggers in the real world I’d have to do my own safety stops, so this hazard may only affect loggers that get deployed/retrieved on a drop line.
I’ll run these loggers on the bookshelf for a while to see if any other odd behaviors develop. After that it will be interesting to see how well I can clean them in a bath of isopropyl (?) as I suspect that the mineral oil penetrated deep into those circuit board layers.
Addendum: 2023-05-30
Although the unit’s sleep current was the same as before the pressure testing, the battery in the 30mL tube barely made another twelve hours on the bookshelf before the voltage dropped again – well before the expected remaining run time. So it’s a safe bet that any deployment which exposes coin cells to pressure at depth is a one-shot run. Given how cheap these batteries are, that’s pretty much a given when deploying these little loggers even if they remain dry.
Addendum: 2023-12-01
Short 30ml tubes work well for single-sensor applications, but classroom labs needed to switch between different sensor modules easily. So we added 3D printed rails holding mini breadboards to provide this flexibility, and the 50mL centrifuge tubes provide the space for these additions. They may not have the same depth range, but they are robust enough for most student experiments.
It’s also worth noting that these tests were done with the standard ‘plug-style’ caps that come with the NUNC 50ml centrifuge tubes. A few companies make tubes with an O-ring integrated into the cap (made of silicone or ethylene propylene) which gets compressed when the threads are tightened. Those would provide another layer of moisture resistance at that seal, although they wouldn’t do much to prevent the crush failures. Unfortunately the Nalgene Oak Ridge high-speed polycarbonate centrifuge tubes that could resist those forces have necks pinched in to a diameter too small for the modules in our logger.
A typical climate station from our project with other sensors protected from the direct sun under the bricks. Those loggers get checked carefully because scorpions are particularly fond of these brick stacks.
Most experiments require weather information to put environmental trends into context. So even though the majority of our sensor network is under ground, or under water, each study area includes a climate station on the surface. Our field sites are rarely close enough to government stations for their data to reflect local conditions because the official stations are spatially biased toward population centers and coastlines. As a result, we operate about ten weather stations and of the sensors they contain, tipping bucket rain gauges (TRGs) can be challenging to maintain at stations that only get serviced once or twice a year.
Where to spend your money
A fieldwork photo from early in the project when we were trying many different rain gauge designs. The aluminum funnels at the back are field repairs after the originals became brittle & cracked. Over time, this happens to all of our plastic funnel gauges. It’s worth noting that those aluminum funnels also corrode with organic acids from debris, but that takes 3-4 years instead of just months.
EVERYTHING exposed to full sunlight must be made of metal if you want it to last more than a year. I know there are plenty of tempting rain gauge designs at Thingiverse, but we’ve yet to see even hardened 3D prints stand up to tropical conditions. This is also true for Stevenson screens, where I’d recommend a stack of metal bowls on stainless threaded rods (like that used by the Freestation project) over most of the pre-made ones on the market. Local varmints love using climate stations as chew toys.
A typical station ready to deploy: Left: Hobo/Onset RG2 and right is the older 6″ Texas Electronics gauge it was based on. The separate loggers recording each TRG also record pressure & temp. The central logger records RH%, but RH sensors are so prone to failure that we no longer combine them with anything else. During installation, washers can be added for leveling where the gauges are bolted to the brick.
If you need one, then you actually need two. So long as you follow that first rule, it’s better to install two medium-quality gauges than a single new one that eats your budget. When you’re replacing four to six gauges per year, lighter six-inch diameter units are much easier to transport in your luggage. Be sure to have a receipt ready for import duty: even if you only paid $100 for that used gauge on eBay, you should expect an additional $100 getting it into another country (and significantly more for some shiny new gauge that doesn’t have any scratches or dents on it yet). Another reason to double up is that you can pack them into different suitcases. When the airline loses a bag – which happens more often than you’d expect – you still have at least one to deploy. Finally, if you install dual TRG’s from the start of your project, you then have the option of temporarily re-allocating to singles if a tropical hurricane destroys half your stations.
A low budget hack that you can maintain is better than an expensive commercial solution that you can’t. Avoid any system with special unobtainium batteries or connectors that you can’t buy/repair at your fieldwork destination. That sweet looking ultrasonic combo you were drooling over at AGU was probably engineered for the US agricultural market, and may not work at all in Costa Rica. If you do start testing acoustic or optical rain sensors, then have a second low tech backup on the station beside it. Most methods have some sort of ‘blind spot’ where they generate inaccurate data and the only way to spot that is to have data from a different device to compare. Reed switches also have the advantage that they require no power to operate.
A new gauge with a funnel full of standing water after only six months.
The debris snorkel plugged because it was designed for fine mid-west field dust, rather than the soggy leaf debris blowing around in a tropical storm. Pine needles cause similar headaches for researchers in northern climates.
Watch out for siphon mechanisms at the bottom of funnels designed to improve accuracy.
Anything that makes the flow path more convoluted will rapidly clog – so I cut them out.
Location, Location, Location
Installation guidelines for weather stations usually make assumptions that only apply in wealthy first world countries. This is hardly surprising given that even mid-range kit will set you back $1,000 and pro-level equipment can top $10,000 when you include the wireless transmitters & tripod mounting system. But our research almost never happens under such genteel conditions, so here’s my take on some of those serving suggestions:
This station has never been disturbed.
A brick stack used to raise the funnels above the roof edge walls. These are bound with construction adhesive and industrial zip ties. Rooftop stations are still affected by high winds and falling branches, but just as often the disturbance is from maintenance people working on the water tanks, etc.
Place the weather station in an open area, free from obstructions such as trees or buildings, to ensure proper air flow and accurate wind measurements. So what do you do if those open areas only exist because someone cut down trees to build? And anemometer measurements are only possible if your kit can stand being hit by several tropical storms per year. Not to mention the amount of unwanted attention they draw. Wind data is one of the few things we rely on government & airport stations for.
Choose a location with a stable and reliable power supply, or consider alternative power sources such as solar panels or batteries. The expectation of reliable electricity / internet / cell phone reception is as humorous to a field scientist as the expectation of getting a hot shower every day. For more giggles, why not pop over to the next geo-sci conference in your area and ask them how long their solar powered station in Michigan ran before it was riddled with buckshot. Batteries are your only option, and the system should be able to run at least twice as long as your expected servicing schedule because things never go according to plan.
Locate the weather station in an area that is easily accessible for maintenance and repairs. Even in areas that regularly get pummeled by hurricanes, vandalism/theft is our biggest cause of data loss. Any equipment within reach of passers-by will be broken or missing within a couple of months – especially if it looks like a scientific instrument. So it’s worth a good hike through dense jungle to protect your data, even if that makes the station harder to access.
Choose a location away from any artificial sources of heat, such as buildings or parking lots. Rooftops are the only locations where we’ve managed to maintain long term stations because they are persistent, hidden from view, and the surrounding trees have been cleared. And in an urban environment…isn’t that, you know, the environment? Yes the thermal data is off because those rooftops go well over 45°C, but temperature is the easiest data to get from tiny little loggers that are more easily hidden at ground level.
Consult with local authorities and meteorological agencies to ensure that the location meets any necessary standards or regulations. A solid long-term relationship with the land owner and your other local collaborators is vital for any research project, but don’t expect local authorities to make time for a friendly chat about your climate station. NGO’s are usually run by volunteers on shoe-string budgets so they’ll be grateful for any hard data you can provide. However, those same groups are often a thorn in the paw of the previously mentioned authorities. Threading that needle is even more complicated when some NGO’s are simply development blockers for large landowners waiting for property values to rise high enough for their own development project to become profitable. In addition to significant amounts of paperwork, public lands suffer from the problem that legislation & staff at the state/territory level can change dramatically between election cycles, sometimes to the point of banning research until the political wind starts blowing in a different direction.
Maintenance
The best maintenance advice is to have separate loggers dedicated to each sensor rather than accumulating your data on one ‘point of failure’ machine, especially when DIY loggers cost less than the sensors. We try to bring enough replacement parts that any site visit can turn into a full station rebuild if needed.
After six years in service I’m surprised this rooftop unit hasn’t been zapped by lightning.
Even with zip-tie bird spikes this gauge still accumulates significant poop each year. This passes through the main filter screen which stops only sticks, seeds & leaves. Chicken wire is another common solution to the bird roosting problem that’s easy to obtain locally.
Funnel & screen after the annual cleaning. This stainless steel kitchen sink strainer works far better than the commercial solutions we’ve tried because it has a large surface area with a section in the middle that rises above most of the debris. It is installed at a slight angle and held in place by wads of plumbers epoxy putty. This has become a standard addition to ALL of our rain gauges.
You’d think name brand gauge makers would use stainless steel parts – and you’d be wrong. Sand & coat those internal screw terminals with grease, conformal, nail polish, or even clear acrylic spray paint if that’s all you can find locally. This also applies to pipe clamp screws which will rust within one year even if the band itself is stainless.
Like bird spikes and debris snorkels, there are several commercial solutions for calibrating your gauge, but you can also find 3d printable models for constant flow Mariotte bottle rigs in the open source literature. In a pinch you can do a field check simply by poking a tiny pin-hole in a plastic milk jug or coke bottle filled with 1 litre of water from a graduated cylinder. Placing this on the funnel of a rain gauge gives a slow drip that generally takes about 30 minutes to feed through. The slower you run that annual test the better, and ideally you want an average from 3-5 runs. Of the many gauges we’ve picked up over the years, I have yet to find even new ones that aren’t under-reporting by 5-10%, and it’s not unusual for an old gauge to under-report by 20-25% relative to its original rating. Leveling your installation is always critical, but this can be difficult with pole mounted gauges. In those cases you must do your calibration after the gauge is affixed. I rarely move the adjustment stops on a gauge that’s been in place for a couple of years even if the count is off, because that’s less of a problem to deal with than accidentally shearing those old bolts with a wrench.
The Data
Rain gauges have large nonlinear underestimation errors that usually decrease with gauge resolution and increase with rainfall rate – especially the kind of quick cloud-burst events you see in the tropics. Working back from the maximum ranges, you’ll note that few manufacturers spec accuracy above two tips per second. So that’s a reasonable ‘rule of thumb’ limit for small gauges with plastic tippers that will plateau long before larger gauges with heavier metal tipping mechanisms. Gauge size is always a tradeoff between undercounting foggy drizzles at the low end (where smaller tippers are better) or undercounting high volume events (where larger gauges generally perform better). Even if you hit the sweet spot for your local climate, storms can be so variable that a perfectly sized & maintained gauge still won’t give you data with less than 15% error for reasons that have little to do with the gauge itself.
This adds 5-10 ms of hardware de-bounce to the reed switch. Most gauges have switch closure times between 100 and 125 ms, with 1-2 ms of bounce on either side. After the FALLING trigger, sleep for ~120 ms before re-enabling the interrupt. You can eliminate the 5k pull-up by using the 25k internal pullup on D3, but your rise time increases from 10 ms to 25 ms and the resulting divider only drops to 15% of Vcc rather than zero.
All that’s to say your analysis should never depend on rainfall the way you might rely on temperature or barometric data. More records, from more locations, always gives you a better understanding of site conditions than ‘accurate’ data from a single location. Of course, that gives you the “man with two watches” problem when one of the gauges is in the process of failing. The most difficult situation to parse is where something slowly plugs one of the funnels but both gauges are still delivering plausible data. A signature of this kind of fail is that one gauge of a pair starts delivering consistent tip rates per hour during events while the other gauge shows larger variation. An alarm bell should go off in your head whenever you see flattened areas on a graph of environmental data:
Wasps & termites are particularly fond of rain gauges because they naturally seek shelter underneath them – where the drain holes are.
Daily Rainfall (mm) record from the gold funnel TRG at the top of this post showing before (green) & after (red) the storm that clogged the filter. Failure is indicated by prolonged curving descents followed by a long tail of low counts as the trapped water slowly seeps through the blockage. Normal rainfall data looks spikey because it can vary dramatically in as little as 15 minutes with long strings of zeros after each rain event.
Did I mention snakes? Yep, they love our climate stations. My guess is they go in after residual water left in the tipper mechanism.
These problems are much easier to sort out if both of the gauges at a given station are calibrated to the same amount of rainfall per tip (usually 0.01 inches or 0.2 mm), and they disappear entirely if you have three records to compare.
While I’ve been critical of the cheap plastic tippers you find in home weather station kits, they still have a place in budget EDU labs, and I’ve got more than a few in my back garden feeding data into prototypes for code development. A new crop of metal & plastic hybrid gauges has started appearing on Amazon/eBay for about $150. The build quality seems a bit dubious, but we are going to give them a try this year anyway to see if they can serve as backups to the backups. As they say in the army: “Quantity has a quality all its own”. I wonder if any citizen science projects out there could adopt that motto?
Addendum: 2023-04-17
As luck would have it, that cheap Chinese gauge arrived from Amazon the day after I made this post. I wasn’t expecting much from a $150 rain gauge, but this one turned out to be such an odd duck that I’ll include it here as a warning to others. On the right you see a photo from the listing, which made me think both the body and the funnel were made from brushed metal. What actually arrived made it clear the whole listing was carefully crafted to hide some pretty serious design flaws.
Another dented delivery, though to be fair the metal is tissue-paper thin. At least this one didn’t get stolen from the front porch. Aluminum spray paint was used to disguise the crappy plastic funnel in the listing photos.
You could snap any part of this mechanism with finger pressure. And I wouldn’t take bets on how waterproof that junction box is either. There were no photos of this mechanism in the listing, which should have stopped me right there.
The thing that makes this such a good example of bad engineering is that they first optimized production cost with cheap brittle plastic that will likely fail within a year. As a result, the tipper ended up so light that they had to add a second funnel & tipping mechanism to deal with the momentum of drops falling from the main funnel. That upper mechanism is so small it’s guaranteed to plug up with the slightest amount of debris – causing the unit to stop working even before the plastic starts to crack. If they had simply added that extra material to a larger, heavier bottom tipper the upper mechanism wouldn’t have been necessary.
What the heck?
What takes this from merely bad to actually funny was the inclusion of an “Intelligent rainfall monitoring system for data upload via Ethernet, GPRS and RS485”. I presume that was intended to connect with ‘industry standard’ meteorological stations, but who’d tack a cheap sensor like this onto one of those $1000+ loggers? Even stranger to me is the idea you’d waste that much power on a simple reed switch. Fortunately there is a terminal block where you can bypass all that baggage, though that’s also fragile to the point of being single-use.
In the current political environment, the last thing I’d do is put something like this on my ethernet.
Bottom line is that you are better off buying a used unit from a quality manufacturer than you are getting a new one from a company that doesn’t have a clue what they are doing. For comparison, here’s how the mechanisms inside decent gauges look:
While the tipper inside a Texas/Onset gauge is made of plastic it is extremely tough. The needle point pivots are hardened and we’ve yet to see one fail. The magnets however do rust, but like the reed switch they are easily replaced with a bit of CA glue and baking soda. Magnets have fallen from tippers on gauges from several different companies because differential expansion in tropical heat cracks the epoxy.
This High Sierra was built like a Russian tank and you see similarly rugged components inside older gauges from brands like Vaisala. After retiring the other components of the gauge, we repurpose these virtually indestructible innards for drip monitoring in caves. You can 3d print a new base, or mould feet from nylon bolts and plumbers putty.
We’ve been deploying our loggers under water since 2013 and although we’ve posted many detailed build tutorials along the way, it’s time to gather some of that distributed material into a summary of the techniques we use. This post will focus on options available with a modest budget and also include a few interesting methods we haven’t tried yet for reference. To put all this in context; we deploy our DIY loggers to typical sport diving depths and usually get solid multi-year operation from our underwater units.
Arielle Ginsberg examines the sponges covering a flow sensor deployed in a coastal outflow canyon.
Sealants
No matter what coating you use, everything must be scrupulously clean before it’s applied. Corrosion inducing flux is hygroscopic and there’s always some left hiding underneath those SMD parts – especially on cheap eBay modules. That means scrubbing those boards with alcohol and an old toothbrush, drying them with hot air & cotton swabs, and then handling by the edges afterward. Boards with only solid-state parts (like the ProMini) can be cleaned using an ultrasonic cleaner and 90% isopropyl, but NEVER subject MEMS sensors or RTC chips to those vibrations. Polymer based RH sensors like the BME280, or MS5803 pressure sensors with those delicate gel-caps, also get careful treatment. After cleaning, let components dry overnight in a warm place before you coat them with conformal. I clean new modules as soon as they arrive, and store them in sealed containers with desiccant.
This $25 jewelry cleaner gets warm during the 5-10 min it takes to get the worst parts clean, so I run it outside to avoid the vapours.
MG Chemicals 422-B Silicone Modified Conformal Coating is the one we’ve used most over the years. Even with a clean board, adhesion to raised ICs can be tricky as surface tension pulls it away from sharp edges. Like most conformals, 422-B fluoresces under UV-A, so a hand-held blacklight lets you check if it’s thin at some corner, or if you simply missed a spot. The RC/Drone crowd regularly report on many of the other options on the market like Corrosion-X, Neverwet, KotKing, etc. I’ve never seen a head-to-head test of how well the different conformals stand up over time, but the loggers we’ve retired after 5-6 years in service look pretty clean even though silicone coatings are not water vapour proof. I like the flow characteristics of 422 for our small scale application, though the vapours are nasty enough to make you wonder how much brain damage your project is really worth. You can also just burn the stuff off with a soldering iron if you need to go back for a quick modification after it’s been applied. Conformals can be made from other compounds like acrylic or urethane, and at the top of the market you have vacuum-deposited coatings like Parylene.
Nail polish gets mentioned frequently in the forums and it’s usually a type of nitrocellulose lacquer. While it’s non-conductive and non-corrosive, acetate chemistry is not far off acetone which solvates a lot of stuff. So nail polish may soften some plastics and/or the varnish protecting your PCBs. It might also wipe the lettering off some boards. So the trick is to start with the thinnest layer possible and let that harden completely before applying further coats. Nail polish softens somewhat when heated above 200°C with a hot air gun enabling you to scrape it away if you need to rework something after covering. Overall it’s a good low-budget option that’s less complicated to apply than a UV cured solder mask solution.
One of our many early failures before we decided to use only transparent epoxies. The outer surface of this epoxy was intact, giving no hint of what was happening below.
Some epoxies permit slow water vapour migration, leading to corrosion at points with leftover flux. Like the white example above, this potting was still OK at the surface. Both of these failures pre-date our use of conformal on everything.
You never get 100% coverage so the areas underneath components usually remain unprotected. But coatings really shine as a second line of defence that keeps your logger going when the primary housing suffers minor condensation or makes the unit recoverable after a battery leak. Even when we intend to pot a circuit completely, I still give it a thin coat of conformal to protect it during the week long burn-in test before encapsulation. (If you are using cheap sensors from eBay, expect ~20% infant mortality) Be careful not to let coatings wick onto metal contacts like those inside an SD card module or USB connector and remember to seal the cut edges of that PCB so water can’t creep between the layers.
The delicacy of application required when working with IC sensors means that spray-on coatings are usually a bad idea, but there are exceptions. Paul over at Hackaday reports success using clear acrylic spray paint as a kind of poor man’s Parylene after “comparing the MSDS sheets for ‘real’ acrylic conformal spray coatings, and acrylic paint. All that’s missing is the UV indicator, and the price tag.” He uses this technique in outdoor electrical boxes but the first thing that comes to my mind is coating the screw terminals inside most rain gauges (see photo at end of post), and the exposed bus-bars you see in some climate stations.
Potting / Encapsulation
Hot glue is a quick way to seal one side of a pass-through so you can pour liquid epoxy on the other.
Hot-melt Glue: Glue sticks come in a variety of different compounds. But it’s hard to know what’s in the stuff at your local hardware store so my rule of thumb is to just buy the one with a higher melting point. If you are gluing to something with a high thermal mass or a surface that can transfer heat (like copper PC board) the glue will freeze before it bonds. So preheating the item you are working on with a hot air gun before gluing is usually a good idea. I’ve used glue sticks for rough prototypes more times than I can remember, sometimes getting several months out of them before failure in outdoor locations. Cheaper no-name sticks tend to absorb a lot of water(?) and have more trouble sticking to PCB surface coatings. So it’s a temporary solution at best unless you combine it with something more resistant like heat shrink tubing. Add glue to what you’re sleeving, and it will melt and flow when you shrink – effectively a DIY adhesive lined heatshrink:
Here I used leather gloves to squeeze the hot-melt glue inside adhesive lined heat-shrink until it covered the circuit without bubbles. This one lasted ~8 months and then we switched to epoxy fills.
Hot glue is also quite handy for internal stand-offs or just holding parts together if they are too irregularly shaped for double-sided mounting tape to do the job. Isopropyl alcohol helps remove the glue if you need to start over.
Superglue & Baking Soda: These dollar-store items are perfect for sealing & repairing the polymer materials that most waterproof kit is made from. Adam Savage has a great demo of this on YouTube. That gusseting build-up technique is so fast it now accomplishes many of the things I used to do with hot glue. CA glue & spray-on accelerant can also be used to improve the strength of 3D prints, as demonstrated by the ever-mirthful Robert Murray-Smith. The sealed surface of your print can then be written on with a sharpie marker without the black ink bleeding into the PLA layers, although I also use clear matte-finish nail polish for this labeling.
At this scale the viscosity of your encapsulating material is as important as any vapours it might give off. To avoid wicking problems, a ring of ‘dry’ plumbers putty can secure a filter cap over the sensor after the liquid potting compound sets.
Silicone Rubber comes in two basic types: ‘Acid cure’ which smells like vinegar and ‘Neutral cure’ which gives off alcohol while it hardens (often used in fish-tank sealants). Never use acid curing silicone on your projects. Hackaday highlighted a method using Tegaderm patches to give silicone encapsulations a professional appearance, although you can usually smooth things well enough with a finger dipped in dish detergent. In another Hackaday post on the subject, a commenter recommends avoiding tin-cured RTV silicones in favor of platinum cured, which has a longer lifespan and less shrinkage. Really thick silicone can take several days to cure, but accelerants like corn-starch or reptile calcium powder can cut that to a few hours. It’s also worth knowing that silicones expand/contract significantly with temperature, because this can mess with builds using pressure or strain sensors.
The $5 3440 Plano Box housings we use on the classroom loggers stand up to the elements well enough in summer months, but rarely have an adequate seal for the temperature swings in fall or winter. Judging by this post over at AVRfreaks, this is a common issue with most of the premade IP68 rated housings on Ebay/Amazon.
While silicone is waterproof enough for the duration of a dive it is NOT water-vapor proof. I often use GE Silicone II (or Kafuter K-705) to seal around the M12 cable glands we use on student projects. However, water vapor eventually gets in when the housings “cool down & suck in moist air”, causing condensation on the upper surface. Any container sealed with SR will eventually have an internal relative humidity comparable to the outside air unless your desiccants prevent that from happening. Always use desiccants with color indicator beads so you can see when they need to be replaced. Silica gel desiccant beads bring the air above them down to about 20% RH in 24-48 hours, but only if there is enough mass for your volume. The best way to determine how much your build needs is to do test runs logging an RH sensor like the BME280 inside the box with different amounts of desiccant. Old desiccant pouches can be ‘recharged’ overnight in a food dehydrator, and used dehydrators can usually be found for ~$10 at your local thrift shop. Dehydrators are also great for reviving old filament if you have a 3d printer.
Liquid Epoxy: If money is no object, then there are industrial options like Scotchcast, but many come in packaging that dispenses volumes far too large for a small batch of loggers. The best solution we could find at the start of this project was Loctite’s line of 50mL 2-part epoxies designed for a hand-operated applicator gun. Used guns can be found on eBay and there are plenty of bulk suppliers for the 21-baffle mixing nozzles at 50¢ each or less. Loctite E-30CL has performed well over years of salt-water exposure on our PVC housings though it does fog & yellow significantly after about six months. Check the expiry date before buying any epoxy because they harden inside the tube when they get old. I’ve often received epoxies from Amazon that are only a month or two from expiring, so don’t buy too much at one time. And they don’t last long once you crack the seal, so I usually line up several builds to use the entire tube in one session.
A background layer of black EA E-60NC potting compound was used to improve the visual contrast. Once that set, a clear acrylic disk was locked into place over the OLED with E-30CL epoxy – taking care to avoid bubbles. The acrylic does not yellow like the epoxy and can be thick enough to protect relatively delicate screens from pressures at depth.
A short piece of adhesive lined heat shrink seals one end of the clear tube to the cable. Epoxy is added to fill about 1/3 of the volume. Then gentle heating shrinks the clear tube from the bottom up until the epoxy just reaches the top. Another adhesive lined ring seals the epoxy at the top of the tube, and a final pass of gentle heating contracts the clear heatshrink into a smooth cylinder. Extra rings are added to strengthen the ends.
We’ve deployed up to 24 DS18b20 sensors on a single logger running underwater for years – failing eventually when the wires broke inside intact cable jackets because of the bending they received over several deployments. This mounting takes a bit of practice, so have a roll of paper towels nearby before you start pouring; I usually do this over a large garbage can to catch any accidental overflow.
This image shows the typical appearance of E30CL after several months in seawater. The brown dot is a marine organism that bored into the epoxy, but they have never tried to drill through the housing itself… which says something about the toxicity of polyvinyl chloride.
The 2-Part fiberglass resins used for boat repair are another good potting option, though they are often opaque with unusual coloration. Low viscosity mixes can be applied with precision using disposable syringes. It’s important that you transfer the stirred resin into a second container before pulling it into the syringe because there’s often a poorly mixed layer stuck to the sides of the first mixing cup. 3D printed shells are often used as casting molds, but if all you need is a rectangular shape then I’d use a LEGO frame lined with plastic food wrap. You can make single-use molds that conform to unusually shaped objects with sheets of modeling clay. When encapsulating large volumes you can make that expensive epoxy go farther with ‘micro-balloon’ fillers made from tiny phenolic or glass spheres. I’ve used old desiccant beads for this many times. Other inert fillers like talc powder are sometimes used to lower peak temps during the curing process because fast setting epoxies get quite hot – sometimes too hot to touch. And speaking of heat, all encapsulation methods open the possibility that high power components could cook themselves. So avoid covering any heat sinks when you pot your boards.
Filler / Paste Epoxies: J-B Weld is a good low-budget option for exposed sensor boards. This two part epoxy adhesive bonds well to most plastic surfaces and the filler it carries gives a working consistency somewhere between peanut-butter and thick honey. This is helpful in situations where you want to mount something onto a relatively flat surface like the falcon tubes we use with our 2-part Mini Loggers:
This BMP280 module already has a coating of conformal.
Shift the epoxy to the edges of the sensor with a toothpick
Although the original grey formulation gets its color from metal filings, it is an electrical insulator. The older style JB Weld that comes in two separate tubes is slightly thicker than the version sold with an applicator syringe. It’s also worth noting that the stuff really needs at least 24 hours to set – not the 6 hours they claim on the package. There is also a clear version that can be used to protect light sensors, but I’ve yet to field test that in harsh enough conditions to see how it ages:
JB can also be used to secure delicate solder connections.
PTFE tape is a good diffuser if light levels get too high.
A JB-weld coated DS18b20 after 6 months in the ocean. Specks of iron-particle rust can be seen, but when I broke away the coating the can underneath was still clean & shiny.
Wax: I haven’t tried this yet but it sounds like it could be fun: Refined paraffin can be purchased in food grade blocks for sealing jars, etc. at most grocery stores and it flows well into small component gaps. It’s also removable; however the 45°C melting point which makes this possible is too low for outside deployments, where I’ve seen loggers reach 65°C under tropical sun. A tougher machinable wax can be made at home by mixing LDPE (plastic grocery bags) or HDPE (food containers) into an old deep fryer full of paraffin wax. The general recipe is a 4:1 ratio of paraffin to LDPE/HDPE and this raises the melting point enough to withstand summertime heat. Or you could try Carnauba wax which has a melting point above 80°C. You probably want to do partial pours with any wax based approach as shrinkage can be significant. If I had to make something even more heat resistant I’d consider an asphalt-based roofing cement. That’s a one-way trip, but it should last quite a while outside.
If you’re spending company money, it’s worth noting that many professional potting compounds like those from 3M are sold in hot-melt glue stick formats [usually 5/8″ (16 mm) diameter rather than the more common hobby market 1/2″]. This dramatically reduces waste & mess compared to working with liquid epoxies. Of course, it’s unlikely a DIYer will be able to use them as the applicators alone can set you back $300 to $600 USD. Another factor to consider is the different expansion rates of the circuit you are trying to protect vs the compound you are using for the encapsulation: hard epoxies may cause electrical failures by subjecting components to more stress when the environment is cycled between extreme temperatures. In those cases it is probably better to use softer compounds.
Housings & Connectors
Although 3D printers are now affordable, we still use plumbing for our underwater housings so that others can replicate them with parts from their local hardware store. The design has changed significantly over time but this tutorial video from 2017 still stands as the best overall description of the ‘potting wells’ method we use to mount sensors on those PVC housings. It also shows how to make robust underwater connectors using PEX swivel adapters:
Smooth surfaces on the inside of those wells are scored with a wire brush or rough grit sandpaper before pouring the epoxy. After solvent welding, leave the shells to set overnight before adding epoxy because bad things happen when you mix chemistries. In fact, that’s a good rule for all of the things listed in this post. Otherwise that expensive potting compound could turn into a useless rubbery mess. Another important thing to note is that we break the incoming wires with a solder joint that gets encapsulated before the housing penetration. This is more reliable than cable glands because water can’t wick along the wires if the jacket gets compromised. The shell shown in that video uses a Fernco Qwik-Cap as the bottom half of the housing, and quite a few Qwik-Cap housings have survived years under water, although the flexing of that soft polymer limits them to shallower deployments. So these wide-body units get used primarily for drip loggers & surface climate stations. It’s worth noting that water vapour slowly migrates through the plastic knockout cap on the upper surface of our drip counters. So they require fresh desiccants once a year even though the logger could run much longer than that. A reminder that over the time scales needed for environmental monitoring, many materials one thinks of as ‘waterproof’ are not necessarily vapour proof.
For underwater deployments we developed a more compact screw-terminal build that would fit vertically into a 2″ cylindrical body. After many struggles with salt water corrosion we gave up on ‘marine grade’ stainless steel and started using nylon bolts to compress the O-ring. But these need to be tightened aggressively as nylon expands in salt water (we usually pre-soak the bolts overnight in a glass of water before sealing). Nylon expansion has also caused problems with the thick 250lb ties we use to anchor the loggers. In high-humidity environments, cheap nylon zip ties become brittle and break, while expensive industrial ties stretch and become loose. We’re still looking for better options but when you are working under water, you need something that can be deployed quickly.
We’ve tried many different epoxy / mounting combinations on the upper cap of those housings, but with the exception of display screens we stopped using the larger wells for underwater units because the wide flat disk of epoxy flexes too much under pressure. This torsion killed several sensor ICs on deployments below 10m even though the structure remained water-tight.
As our codebase (and my soldering skills) improved we were able to run with fewer batteries – so the loggers became progressively smaller over time. Current housings are made from only two Formufit table leg caps and ~5cm of tubing. The same swivel adapter used in our underwater connector now joins sensor dongles to the housing via threaded plugs. Sensor combinations can be changed easily via the Deans micro connectors we’ve used since the beginning of the project. Though the photo shows two stacked O-rings, we now use shorter bolts and only one. See this post for more details on the construction of this housing.
EPDM O-rings lose much of their elasticity after a couple of years compressed at 20-25m, so for deeper deployments I’d suggest using a more resilient compound. And there are now pre-made metal housing options in the ROV space that didn’t exist at the start of this project. With the dramatic size reduction in recent models, you occasionally find a good deal on older Delrin dive-light housings on eBay. Another interesting option is household water filter housings made from clear acrylic. They were too bulky for our diving installations, but this Sensor Network project at UC Berkeley illustrates their use as surface drifters.
Other Protection Methods
Mineral oil: PC nerds have been overclocking in tanks of mineral oil for ages, so it’s safe at micro-controller voltages. It’s also used inside ROVs with a flexible diaphragm to compensate for changes in volume under pressure. Usually a short length of Tygon tubing gets filled with oil and stuck out into the water, or the tube can be filled with water and penetrates into the oil-filled housing. We use a similar idea to protect our pressure sensors from salt water:
The MS5803 pressure sensor is epoxied into a 1/2″-3/4″ male PEX adapter and a nitrile finger cot is inserted into the stem of a matching swivel adapter.
The sensor side gets filled to the brim with mineral oil.
The two pieces are brought together.
Then tighten the compression nut and use a lubricated cotton swab to gently check that the membrane can move freely.
Moving those membrane-protected sensors onto a remote dongle makes it much easier to recover the sensor after a unit gets encrusted with critters. Oil mounts have worked so well protecting those delicate MS58 gel-caps that I’ve now started using this method with regular barometric sensors like the BMP280. This adds thermal lag but there’s no induced offset in the pressure readings provided there’s enough slack in the membrane. Silicone oil is another option, and I’ve been wondering about adding dye so that it’s easier to spot when those membranes eventually fail. I avoid immersing any components with paper elements, like some old electrolytic capacitors, or parts that have holes for venting.
Bio-fouling on one of our loggers deployed in an estuary river. We only got three months of data before the sensor was occluded.
We remove calcareous accretions by letting the housings sit for a few hours in a bucket of dilute muriatic acid. Many of our loggers get this treatment every season.
Cable Protection: For the most part this comes down to either strain relief, or repairing cuts in the cable jacket. Air curing rubbers like Sugru are fantastic for shoring up worn cables where they emerge from a housing, though I usually use plumbers epoxy putty for that because I always have it on hand for the housing construction. Sugru is far less effective at repairing cables than something that’s cheaper but less well known: self-fusing rubber electrical sealing tape (often called ‘mastic’ or ‘splicing’ tape). This stuff costs about $5 a roll and has no adhesive: when you wind it around something it sticks to itself so aggressively that it cannot be unstuck afterward, yet remains flexible in all directions. This makes it perfect for repairs in the middle of a cable, and we’ve seen it last months under water though it quickly becomes brittle under direct sun. And it does the job in places you can’t reach with adhesive lined heat shrink. I usually slap a coat of plasti-dip or liquid electrical tape over top of those repairs. This improves the edge seal and makes the patch look better. Self-fusing tape is also great for bulking out cables that are too thin for an existing cable gland, or combining several wires into a water-tight round-profile bundle for a single gland.
However the best advice I can give is to simply avoid the temptation of soft silicone jacket cables in the first place. Yes, they handle like a dream under water, but you will pay for it in the long run with accidental cuts and hidden wire breaks due to all that flexing. Another hidden gotcha is that silicone compresses at depth, which brings the wires closer together – potentially increasing the capacitance of a long bus enough to interfere with sensor handshakes. Our go-to after many years at the game is harder polyurethane jacketed cables (like the ones Omega uses for their thermistors). It’s a pain in the arse to strip & solder, but you can pretty much drive a truck over it. And somehow that kind of thing always happens at least once during a field season.
Lost count of how many times ants/wasps have bunged up our rain gauges. And I should have coated those screws…
Double housings: Instead of sealing the housing to block out humidity, control the point where it condenses by surrounding an inner plastic housing with a second outer shell made of aluminum. Then let everything breathe naturally with the idea that condensation will happen first on the faster cooling aluminum, thereby protecting the inner components. I’ve heard of this being used for larger commercial monitoring stations but I’ve never been brave enough to try it myself. You want some kind of breathable fabric membrane over any vent holes to keep out dust (to IP6X) and especially insects, because if there’s a way into your housing they will find it and move in. Another simple but related trick is to fill any void spaces inside your housing with blocks of styrofoam: this minimizes the total volume of air exchanged when the temperature swings.
Addendum: Testing Underwater Housings
People reading this post might also be interested in the DIY pressure chamber which we’ve been using to test our little falcon tube loggers. It’s made from a household water filter canister, with a total cost of about $70 USD. The domestic water pressure range of 40-80psi overlaps nicely with sport diving depths. The 30mL tubes are stronger for single sensor builds, but the 50mL tubes provide more space for our 2-Module classroom data logger. This model uses two mini breadboards for convenient sensor swaps.
Addendum: 3D printed housings
PLA (or poly-lactic acid) is made from glucose that is converted to lactic acid, with an H2O molecule removed to trigger the polymerization process. While water doesn’t degrade most printable polymers, PLA slowly gets brittle when wet because it recaptures that H2O. But there is an energy barrier that requires the right temperature, pH, or UV conditions. Bacteria can also accelerate that chemistry, and just anecdotally we’ve seen biofilms grow much faster on prints deployed to wet environments compared to our PVC housings. But it’s impossible to distinguish whether it’s the polymer or some other lubricant/additive in the mix that they are attracted to. We’ve also seen ‘compression/tension’ mechanisms in PLA fail because of reduced strength even though there were no visual indications that the parts had degraded. This has motivated our increasing conversion to PETG for installations, although I still use PLA while a new design is being developed because it prints faster. Most polymers are about 1.25 grams per cubic centimeter, so around 85% infill gives neutral buoyancy in water. Nylon can absorb a great deal of water and swell, making submerged prints unusable.
The first thing to do when printing a housing is to make sure your filament is bone dry before starting, but all 3D prints are still going to be porous to some extent. There’s an interesting article on 3D printed underwater housings over at the Prusa Research blog. The initial strategy is to increase the flow rate and temperature so less air gets trapped between the polymer lines laid down by the nozzle. Then I’d try a coating of CA glue with spray-on accelerant to seal the outer surfaces. Or you could switch to SLA resin printing like RCtestflight – even then they still filled the space around their servos with silicone grease. Our little loggers work fine immersed in mineral oil, which is relatively easy to clean up later with alcohol. Ironing produces smooth flat layers that you would think are more water resistant, but this Reddit contributor found that once bubbles are in the print, they cannot be removed by ironing. ‘Brick layers’ would help reduce the air gaps but unfortunately that technique is tied up with patent issues. CPS drone had excellent results making prints waterproof by treating them with Dichtol AM Hydro, which has a very low viscosity specifically for impregnation and sealing. This makes me think about testing the many waterproofing sealers at my local hardware store. Multiple thin coats of an epoxy should create an external seal if they get sanded between coats, but the underlying print would have to be strong enough to resist deforming & cracking. CPS also created interesting epoxy & print combination endcaps. Pass-throughs are critical weak points in any design because water under pressure can rapidly wick between multi-strand wires if the outer insulation gets cut. Stripping to raw copper and embedding in the epoxy is usually required, and you need to add a soldered break-point or even a solid wire bridge through the housing because there is no way to get the epoxy to fill all the spaces between the strands without a vacuum chamber.
Gyroid infills form continuous tubes that you can pour something like glass-fiber-filled epoxy into for strength, but heat from the curing resin can cause deformation. Re-melting the prints while packed in powdered salt also seems to work for sealing against both water and gas. Even if you eventually get all the bubbles out, the polymers themselves still have a vapour penetration rate. This is an issue in labs where you can’t do isotopic analysis on samples that have been left in poly centrifuge tubes too long. And it’s not unusual for 15ml Eppendorfs to lose 1/3 of their volume in 8 months even if they are sealed well. If you have the budget for SLA printing, Formlabs have posted an interesting design where the O-ring seals are directly printed into the parts and the enclosure is sealed using a hand-screwed bezel. This is much like the seal you find in PVC plumbing parts like non-glued unions.
Slant3d demonstrated an interesting idea for enclosure boxes that keep water out without a seal, although in our project condensation of ambient humidity is equally damaging in the long term. 3D printing is also handy for making angular connection joints for complex protective structures or custom sensor mounts that have to last underwater without rusting. But sometimes in the field you are better off using plumbers putty for those PVC tripods. I’ve lost count of how many times I jury-rigged something on site with plumbers putty that then lasted for years of outdoor exposure.
This ‘two-part’ logger fits nicely inside a falcon tube which can be deployed to 10m water depth. With a bit of practice, complete logger assembly can be done in about 30-60 minutes depending on additions. The 4K EEprom on the RTC board will hold 4096 1-byte RTC temperature readings (~ 40 days worth @ 15 min. intervals) and that’s easily extended with I2C memory chips or modules.
The ‘solderless’ EDU build we released in 2020 provides remarkable flexibility for courses in environmental monitoring. However an instructor still needs to invest about five days ordering parts, testing components, and preparing kits for a course being run remotely. (Although only half that is needed for in-person courses where the students pin & test the parts themselves.) While that’s not unusual for university lab subjects, it’s a stretch for other instructors. And thanks to COVID shortages, modules that were only a buck at the beginning of the project might now set you back $5 each. So with all that in mind, we’ve continued development of a ‘lite’ version of our logger with the lowest possible prep. That new baby is now ready for release, with data download & control done through the serial monitor window.
Instead of connecting the red channel of the 5mm LED, I use the onboard D13 LED for red since other libraries and programs often require that LED.
With just three core components our only option was to remove the SD card. Groundwork for this change has been in place for a long time in our research loggers, with sensor readings first getting buffered to the 4k to reduce the number of high-drain SD saves. Getting rid of those power hungry cards opened up the possibility of running the entire unit from the coin cell on the RTC module. But a power budget that small adds some complexity to the base code which must minimize CPU run-time and limit peak current.
Internal Resistance with 6.8mA Pulse Discharge, from the Energizer Cr2032 datasheet. (Note: the 8MHz ProMini only draws about 3.3mA when it’s running)
Most garden-variety chips have a lower operating limit of 2.7v – so a 3v Cr2032 can only fall about 300mv under load before triggering the brown out detection (BOD) circuit. Voltage droop changes over time because the internal resistance of a coin cell is only 10 ohms when new, but increases to 100 ohms by end of life. According to Maxell’s 1Meg-ohm (3.3µA continuous) discharge test, coin cells should stay at their voltage plateau until they deliver about 140mAh. In our testing, 200uF battery buffering capacitors can extend runtime up to 30% but this varied with the quality of the battery. Of course, if you can reach a year without the rail buffer, then you’ve probably filled the memory. So rail caps may only be necessary with high-drain sensors or in low temperature deployments where the battery chemistry slows. It’s not unusual to see a 50mv delta at the battery terminals for every 15°C change in ambient so a standard coin cell will not power the logger very long at temperatures very far below freezing.
However, there’s only so much you can predict about system behavior in the real world – especially with stuff constructed from cheap modules carrying half a dozen unspecified components. So let’s just build one and see how it goes.
Clipping the main VCC leg (the 2nd leg in from that corner) forces the DS3231 to run from the backup power input on the other side of the chip.
Disconnect the modules indicator LED by removing its limit resistor with a hot soldering iron tip.
Remove the 200ohm charging resistor & bridge VCC to the coin cell backup power at the black ring end of diode.
Running from Vbat depowers most of the logic inside the DS3231 and disables the 32kHz output (I usually cut away that header). According to the -M and -SN datasheets, both RTC chips draw the same average of 3.0µA timekeeping battery current to keep only the oscillator, temperature compensation & comparator working. The default 4k EEprom draws between 20-40nA when not being accessed. Bridging VCC directly to Vbat also means a 3.3v UART will push some sub-milliamp reverse currents through an older coin cell. Despite dire manufacturer warnings that reverse currents beyond 1µA will heat manganese-dioxide/lithium cells until they explode, I’ve yet to have a single problem (or even any detectable warming) with loggers connected for many days during code development. Drift on these RTCs is usually a loss of ~3-5 seconds per month, but you can reduce this considerably by calibrating the DS3231 Aging Offset register with a GPS module. The only annoying issue with these RTCs is that, once enabled, the alarms continue to be generated forever unless you set an ‘unreachable’ next alarm time, or shut down the main oscillator completely.
3 mods to the RTC module. It’s a good idea to test those RTC modules (with the logger base-code) before assembling your logger. After successful testing, I add conformal coating to the RTC & let it dry before joining the two modules.
OPTIONAL: Add another EEprom to the RTC module
With a practiced hand you can add more EEprom memory right onto the RTC module. 64k is the sweet spot for single sensors generating 2-byte integers because they can store ~340 days of data at a 15 min. interval. Each additional EEprom adds 0.2 to 0.4µA to the logger’s overall sleep current. 64k chips are about $1, with 256k chips selling for about $3.50 at Digikey. AT-series EEproms larger than 64k show up on the I2C bus as multiple 64k chips.
For an AT24C512 you only need to connect the four pins shown because the chip internally grounds any pin left floating. The RTC module pulls all connected address pins high (setting the lower 4k to 0x57), but an upper 64k EEprom with the connections above would go to 0x50.
Stacked EEproms in a 0x57 & 0x51 configuration. If this soldering is a bit too advanced see the ‘adding sensors’ section for a way to increase storage space with modules. Of course you can stack on those boards as well!
PREPARE the Pro Mini:
Carefully clip away the regulator from the 2-leg side to prevent 80µA of back-leakage thru the regulator under battery power.
Clip away the reset switch. This logger can only be started with serial commands via a UART connection.
Remove the limit resistor for the power indicator LED with a hot soldering iron tip. This is near the regulator area.
An Arduino Pro Mini continues at the heart of our loggers because it is still the easiest low-power option for projects that aren’t computationally demanding. 328p’s normally sleep well below 1µA with the BOD turned off but 17µA with BOD left on. It’s worth noting that component-level testing of the tiny sleep current on an isolated ProMini board requires stable battery power, as UART-supplied voltages are too noisy for sensitive devices like a Current Ranger. You will occasionally get clones with fake Atmel chips that won’t go below ~100µA sleep no matter what you do.
At 8MHz the ‘official’ lowest safe voltage for the 328p is 2.7v and the BOD cutoff circuit is always on while the processor is running. So you want to stop logging if the rail starts falling below ~2850mv. Keep in mind that there is a wide range of actual brownout thresholds, from a minimum of 2.5v to a max of 2.9v. So if you get a logger that consistently quits while the battery voltage is still high, you might want to check that unit’s actual BOD trigger point with a bench power supply.
Attach UART header pins and trim three of the tails flush to avoid accidental contact later. Do not progress with the build until you have confirmed the ProMini has a working bootloader by loading the blink sketch from the IDE.
OPTIONAL: Add a Thermistor, LDR & Indicator LED
One of my favorite sensor combinations for this logger is an NTC thermistor & CdS cell, which adds nothing to the sleep current. Range switching with two or more NTCs could also be done if the max / min resistance values of one thermistor can’t maintain your resolution requirements. We explained how to read resistive sensors using digital pins in 2019; so here I will simply show the connections. Add these passives to the Pro Mini BEFORE joining it to the RTC module, taking care not to hold the iron so long that you cook the components. Each [104] 0.1uF of capacitance you use in this circuit gives you about 3000 raw clock counts for the 10k reference resistor.
D6=10kΩ 1% metal-film reference resistor, D7=10k 3950 NTC, D8=300Ω (any type), D9=LDR (5528). Note that the LDR is optional, and could be replaced with any other type of resistive sensor up to ~65kΩ. A typical 10k NTC reaches that limit near -10°C and the LDR usually reaches it at night.
The code puts all lines that are not being read into input mode to isolate them from the circuit when reading each individual channel. With these sensors I usually jumper D2->SQW with a longer piece of flexible wire to avoid covering the D13 LED.
A 104 ceramic cap to GND completes the ICU timing circuit. With a 0.1uF as the charge reservoir, each resistor reading takes ~1-2msec in sleep mode IDLE. With this small capacitor, timing jitter produces a band of readings about 0.02°C wide on the output plots if temp. readings are repeated rapidly.
We created a how-to guide on calibrating thermistors which makes use of an online calculator to determine the equation coefficients. You should never expect the NTC you have in your hands to match the values provided by its manufacturer, but even if it did our method leverages the behavior of the ProMini itself as part of the measuring system. So there is no point buying expensive interchangeable thermistors – the cheapest penny-parts will do just fine. I add a thermistor to all my loggers (even if they will eventually drive I2C sensors) because a good way to test new loggers is to read the NTC at a short one-second interval until the EEprom has been completely filled a few times. After that burn-in you can be sure the core of the logger is reliable before adding other sensors.
[OPTIONAL] Common cathode RGB with red leg cut on A0=GND, A1=Green, A2=Blue. The colors are lit via internal pullups to keep below 50µA. Leaving the RED LED on D13 in place is useful to let you know when the bootloader has been triggered, and that gets used by default if no other LED is enabled at the start of the logger code. The 1k D13 limit resistor must be left in place to use the onboard LED.
Join the Two Modules:
Resistor legs wrapped in heat shrink extend the A4/A5 I2C bus. These two wires must cross over each other to align with connections on the RTC.
Extend the VCC & GND headers with resistor legs.
Add a strip of double-sided foam tape across the chips on the RTC module and remove the protective backing.
Carefully thread the four I2C jumpers through the RTC modules pass-through port. Press the two boards together onto the double sided tape and solder the four connections.
OPTIONAL: use the VCC & GND wire tails to add an SMD tantalum rail buffering capacitor and trim away any excess wire. Anything from 220 to 1000µF will reduce battery voltage droop as the coin cell ages. 10V 470uF caps provide a good overall balance between buffering ability and leakage currents in the 50nA range. After checking polarity, flip the SMD solder pads to the upper surface for easier soldering. Rail buffering caps can extend runtime by 20-30%, depending on the quality of your coin cell, but they are not necessary for short-term logger operation.
Trim away the (non-functional) 32kHz pin and tin the SQW header pin. Solder a resistor leg (or a short length of 26AWG wire) to interrupt input D2 on the Pro Mini. Add heat shrink to the D2 jumper & solder it to the SQW alarm header pin.
The minimum 2-module stack usually draws about 1µA constant sleep current, but with its TCXO corrections the RTC alone brings the average to at least 3µA. Cheap modules often have leftover flux which can cause current leaks. It’s worth the time to scrub these boards with alcohol before assembly. I found no significant difference in sleep current between setting unused pins to INPUT_PULLUP or to OUTPUT_LOW.
The basic two-module combination usually sleeps around 1-2µA continuous. Most of that is the RTC’s timekeeping current, as a 328p-based ProMini only draws ~150nA in power-down (with regulator removed & BOD off) and the 4k EEprom should be less than 50nA in standby. If we assume four readings per hour at 5mA for 30msec, the battery life calculator at Oregon Embedded estimates a 220mAh battery will last more than 10 years…which is ridiculous. We know from the datasheet that the typical Ibat timekeeping current for the DS3231 is 700nA (datasheet pg 7, with EN32KHZ=0 @3v) but TCXO temperature conversions bring the RTC average up to 3µA – which can’t be seen on this direct measurement. And there’s the battery self discharge of 1-3% per year. Perhaps most important, there’s the complex relationship between pulsed loads and CR2032 internal resistance, which means we’ll be lucky to get half the rated capacity before hitting the typical 328p brown-out at 2.77v. A more realistic estimate would start with the assumption that the battery only delivers about 110mAh, with our logger consuming whatever we measure + 3µA (RTC datasheet) + 0.3µA (coin cell self-discharge). For conservative lifespan estimation we can round that all up to about 5µA continuous, with four 5mA*10millisecond sensor readings per hour, and we still get an estimated lifespan of about two years. So the most significant limitation on this logger is usually the EEprom memory. It’s worth noting that the newer DS3232 variant of the RTC lets you push the TCXO corrections out to 512 seconds. This lowers the average RTC standby to about 1µA -if- you are willing to spend $10 for the RTC chip alone at DigiKey.
Build video: ( 3 minute rapid review)
Note: the order of operations in the video is slightly different from the photos.
In 2023 the logger base code was updated to match the newer e360 model, being essentially identical except for the changes required for the alternate NTC and LED connections. The code requires the LowPower.h library for AVR to put the logger to sleep between readings, and this can be installed via the library manager.
One important difference between a coin cell powered logger and our older AA powered models is that the battery has a high probability of being depleted to the point of a BOD restart loop (which causes rapid flashing of the D13 LED). So we use a multi-step serial exchange in Setup() to prevent data already present in the EEprom from being overwritten by an accidental restart.
Setup()
Purple callouts on the following flow diagrams indicate the places that would need to be altered to add a new sensor to the base code. If all you do is enable sensors via defines at the start of the program you won’t have to deal with the code that stores the data. However to add a new sensor you will need to make changes to the I2C transaction that transfers those sensor readings into the EEprom ( with matching changes to the sendData2Serial function that reads them back out later).
Note: The options on the startup menu will change over time as new code updates are released.
A UART connection is required at start-up so those menu-driven responses can occur through the serial monitor in the IDE. These have 8-minute timeouts to avoid running the CPU too long during unintentional restarts. The menu sequence can be re-entered at any time simply by closing & re-opening the serial monitor window: this restarts the Pro Mini via a pulse sent from the UART’s DTR (data terminal ready) pin.
If you see random characters in the serial window, you have the baud rate set incorrectly. Reset the baud to 500,000 and the menu should display properly HOWEVER you also need to close & re-open the window. If you copy data from the serial monitor when the window still has garbled characters, then only the bad starting characters will copy out. On a new logger the Hardware, Calibration & Deployment fields will display rows of question marks until you enter some text via each menu option. There are additional debugging options that are not displayed unless serial out is enabled, and this menu will CHANGE over time as I add new features.
The first menu option asks if you want to download the contents of the logger memory to the serial monitor window. This can take up to 2 minutes with 256k EEproms at 500000 baud, which is the fastest rate an 8MHz ProMini can reliably sustain. Then copy/paste everything from the IDE window into an Excel sheet and, under the Data tab, select Text to Columns to separate the data fields at the embedded commas. Or you can paste into a text editor and save as a .csv file for import to other programs. While that process is clunky because the IDE’s serial interface doesn’t export, everyone already has the required cable and data retrieval is driven by the logger itself. And yes, the exchange could also be done with other serial terminal apps like PuTTY with logging turned on, CoolTerm, or Termite with the ‘logfile’ filter. You can also redirect Windows COM port output to a file, although it seems limited to only 19k baud. These options may be required on builds with large memory expansions as the IDE serial monitor starts to forget the initial data in its buffer after displaying about 100,000 lines. I have successfully used Termite at 1,000k baud downloading an 8MHz logger, but when the amount of data gets large the ProMini starts limiting you to an effective 500k baud. Functions like itoa() before serial.print(), or Rob Tillaart’s splitDigits4 & splitDigits10 with serial.write(), speed things up considerably. In summary, 250k baud is stable, 500k is usually fine with occasional serial dropouts in a very long download, while 1,000k baud exhibits frequent flaky behaviors with the IDE’s serial monitor. Those character dropouts don’t happen using CoolTerm or PuTTY.
Vref compensates for variations in the reference voltage inside the 328p processor. Adjusting the 1126400 (default) value up or down by 400 raises/lowers the reported voltage by 1 millivolt. Adjust this by checking the voltage supplied by your UART with a multimeter while running the logger with #define logCurrentBattery enabled and serial output ON. Note the difference between the millivolts you actually measured and the current voltage reported on screen, and then multiply that difference by 400 to get the adjustment you need to make to vref for accurate battery readings. After you [7] Change Vref and enter the adjusted number it will be used from that point onward. Stop adjusting when you get within ±20mv.
After the start menu sequence, the first sampling time is written to the internal EEprom so that timestamps for the sensor readings can be reconstructed during data retrieval by adding interval offsets to that starting time. This technique saves a significant amount of our limited EEprom memory, and all it takes is =(Unixtime/86400) + DATE(1970,1,1) to convert those Unix timestamps into human-readable ones in Excel. It is important that you download the old data before changing the sampling interval via the startup menu option, because the interval stored in EEprom is also used to reconstruct the timestamps. Valid sampling intervals must divide evenly into 60, and second-based intervals can be set for rapid testing if you first enter 0 for the minutes.
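The reconstruction at download time amounts to one multiply-add per record. A minimal sketch (function and variable names are mine, not from the logger code):

```cpp
#include <stdint.h>

// Timestamp reconstruction as described above: only the first sampling
// time (Unix seconds) and the interval live in EEprom; every record's
// time is recovered from its position in memory. Names are illustrative.
uint32_t recordUnixTime(uint32_t firstSampleUnix,
                        uint32_t intervalSeconds,
                        uint32_t recordIndex) {
  return firstSampleUnix + intervalSeconds * recordIndex;
}
// In Excel: =(Unixtime/86400) + DATE(1970,1,1), formatted as a date
```

With a 15-minute interval (900 seconds), record #4 lands exactly one hour after the stored start time.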
No data is lost from the EEprom when you replace a dead coin cell, and you can do the entire data retrieval process over UART with no battery in the logger. But the clock time should only be reset after installing a new battery or it will not be retained. If the time in the serial menu reads 2165/165/165 165:165:85 instead of 2000/01/01 after a power loss, there’s a good chance the RTC’s memory registers have been corrupted and the RTC module needs to be replaced. I’ve managed to do this to a few units by accidentally shorting the voltage to zero while the logger was running from a capacitor instead of a battery.
After setting the clock time and deployment parameters like the sampling interval, the logger asks the user to manually type a ‘start’ command before beginning a run. Only when that second ‘start’ confirmation is received are the EEproms erased by pre-loading every location with zeros, which also serve as End-Of-File markers during download. The selected LED then ‘flickers’ rapidly to indicate that the logger is waiting until the current time aligns with the first sampling alarm before beginning the run.
Main LOOP()
EEprom writes usually draw about the same 3mA current the ProMini draws during CPU up time. ‘Lowest’ battery voltage is checked immediately after the data transfer because an unloaded Cr2032 will always read nominal – even when it’s nearly dead. Timing of events at that point is critical because EEproms freeze if the rail voltage fluctuates too much while they are actively writing – so don’t change those LowPower.idle sections of the code! Also don’t run OLED screens during sensor readings or EEsaves because they are noisy as heck. Logger shutdown gets triggered at the end of the main loop if the EEprom save brings the rail voltage below the 2850mv systemShutdownVoltage or if the memory is full.
Adding I2C Sensors to the logger:
The minimum configuration for this logger can log the 0.25°C temperature record from the DS3231, index-encoded to one byte per reading. Approximately 4000 of these readings can be stored in the 4k EEprom on the RTC module. This works out to a little more than 40 days at a 15 minute sampling interval.
We made extensive use of RTC temperature records in our cave drip loggers at the beginning of the project. The accuracy spec is only ±3°C, but most were within ±1°C of actual at 25°C, and you can calibrate them against more accurate sensors. The RTC only updates its temperature registers every 64 seconds, and any temperature sensor inside the body tube will have about 15 minutes of thermal lag relative to outside air.
The 4K AT24c32 on the RTC module fills rapidly with sensors that generate 2 or 4 byte integers. An easy solution is to combine your sensor module with 32k (AT24c256), or 64k (AT24c512) chips so the sensors bring the extra storage space they will need. These EEprom modules can usually be found on eBay for ~$1 each and after you update the EEpromI2Caddr & EEbytesOfStorage defines at the start of the program, all AT series chips will work with the same code as the default 4k.
The headers on this common BMP280 module align with the 32k headers in a ‘back-to-back’ configuration. The tails on the YL-90 breakout extend far enough to connect the two boards. Note this sensor module has no regulator which is preferred for low power operation.
Pin alignment between the YL-90 and this BH1750 module is slightly more complicated as you can’t cover the light sensor.
Clip away the plastic spacers around the header pins. Then wiggle the BH1750 over the headers on the 32k module. Solder the points where the pins pass through the 1750 board. Note: I2C pullups on the sensor boards can usually be left in place on this low voltage system.
I2C pin order on the RTC doesn’t align with the BH1750 module. So you need to make the required cross-overs with a 4 wire F-F Dupont. Soldering those connections is more robust but do that after calibrating the thermistor.
In addition to the NTC / LDR combination, support for both of the sensors shown above is included in the code on Github, although you will need to install hp_BH1750 and BMP280_DEV with the library manager to use them. Sensors are enabled by uncommenting the relevant defines at the start of the program. No matter what combination of sensors you enable, the total bytes per record must be 1, 2, 4, 8 or 16. Otherwise you will get a repeating error at download, because trying to save data beyond a hardware page boundary inside the EEprom over-writes previously saved data at the start of that same page/block.
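The reason those sizes work is that only they pack evenly into the 32-byte hardware pages of the AT-series EEproms, so no record ever straddles a page boundary and wraps onto earlier data. A hypothetical sanity check (helper name is mine — the logger only reports the problem at download):

```cpp
#include <stdint.h>

// Records of 1, 2, 4, 8 or 16 bytes divide evenly into the AT-series
// 32-byte hardware page, so no record ever crosses a page boundary.
// (Illustrative helper; not part of the published logger code.)
const uint8_t EEPROM_PAGE_SIZE = 32;

bool recordSizeIsValid(uint8_t bytesPerRecord) {
  bool powerOfTwo = bytesPerRecord && !(bytesPerRecord & (bytesPerRecord - 1));
  return powerOfTwo && bytesPerRecord <= 16
         && (EEPROM_PAGE_SIZE % bytesPerRecord == 0);
}
```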
I2C devices are usually rated to sink about 1mA, which sets a lower limit of roughly 3.3k on the total bus pullup at these supply voltages. This means you can leave the 10k’s on those sensor modules in place: the combined bus pullup (incl. the 4k7 on the RTC & 35k pullup on the ProMini pins) comes to about 2930 ohms, which is close to that 3.3k ideal. The open-drain implementation of I2C means that adding more capacitance to the bus will round off the rising edges of your clock and data lines, which might require you to run the bus more slowly with multiple sensors or with longer wires to your sensors. In those cases you can drop the total bus pullup to 2k2 to offset the capacitance.
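The combined bus pullup is just resistors in parallel. A quick sketch of that arithmetic (helper name is mine):

```cpp
// Combined I2C bus pullup = parallel resistance of everything on the bus:
// the 10k on a sensor module, the 4k7 on the RTC and the ~35k internal
// pullups on the ProMini pins. (Illustrative helper, not logger code.)
float parallel3(float r1, float r2, float r3) {
  return 1.0f / (1.0f / r1 + 1.0f / r2 + 1.0f / r3);
}
// parallel3(10000, 4700, 35000) works out to roughly 2930 ohms
```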
The 662k LDO regulator on most eBay sensor modules increases the logger’s sleep current by 6-8µA due to back leakage. For long deployments it can be removed; bridging the in->out pads should then bring your sleep current back to ~1-2µA. That regulator is below spec any time your supply falls below ~3.4v, which is higher than even the initial over-voltage on a fresh Cr2032.
You must use low power sensors that operate from 3.6v down to our 2.7v BOD cutoff. A good sensor to pair with this logger should sleep around 1µA and take readings below 1mA. You are more likely to find these no-regulator sensor modules with low power operation at Sparkfun or on Tindie than on eBay/Amazon. A coin cell simply doesn’t have enough power to supply a high drain CO2 sensor or GPS unless you take heroic measures. Sometimes you can pin-power sensors that lack low current sleep modes, but if you do, check whether that creates current leaks in other places such as pullup resistors; the I2C bus may also be left in an illegal state (idle is supposed to leave the SCL & SDA lines high), requiring a power reset of all the sensors on the bus. Choose sensor libraries which allow non-blocking reads so you can sleep the ProMini while the sensor is gathering data, and replace any delay() statements in those libraries with 15msec powerdown-mode sleeps.
30ml self-standing Caplugs from Evergreen Labware are a good housing option because they have a brace in the cap that just fits four 22gauge silicone jacket wires. The ‘non-sterile’ versions with separate caps are much cheaper to buy than the sterile ones. The outer groove in the lid provides more surface area for JB-weld epoxy, giving you an inexpensive way to encapsulate external sensors. 1oz / 25ml is enough to cover about five sensors. Then clear JB weld can be used as a top-coat to protect optical sensors.
Drill the central channel to pass the I2C wires through the cap. Roughen the upper surfaces with sandpaper to give it some tooth for the epoxy.
Conformal coat the board before the epoxy. Work the epoxy over the sensor board carefully with a toothpick and wipe away the excess with a cotton swab.
We’ve done pressure tests to 45psi, and these tubes can be deployed to ~20m depth, although we don’t yet have any data on how long they will endure at that pressure. These housing tubes should be replaced every three months if they are exposed to sunlight because UV makes the plastic brittle. Adding a very small amount of silicone grease to only the upper edge of the tube before closing improves the seal with the lid, but don’t add too much or the threads will slip. Holes drilled through the bottom stand let zip-ties secure the logger. In our cross calibration of a BH1750 lux sensor to measure PAR (Photosynthetically Active Radiation), we wrapped the body tubes with 2″ aluminum foil tape to reduce heat gain inside.
We have produced small printable rails for the 30ml tubes often used with this two module logger, so here is a link to that shared model on Tinkercad. That internal rail & an external mounting bracket are posted on Github. The easiest way to secure the logger to these rails is with a drop of hot glue from the underside, but I usually twist and solder the legs of a scrap resistor as the tie since I have lots of those lying around:
Insert a scrap resistor into the mounting holes and twist the legs together.
Solder the twisted legs and trim. Angle the joint inwards to avoid scratching the tube.
Angle the I2C headers slightly toward the point of the tube to leave more room for Dupont connectors during the NTC calibration.
There’s room for two or three 0.5gram silica gel desiccant packs in the lid area. Because the ProMini remains exposed, I don’t usually add conformal to the ProMini until the logger has passed all of its pre-deployment run tests. After that I add a generous layer of conformal to everything but the battery contacts and the header pins. Clear nail polish also works for this.
For deeper aquatic deployments, you could use a stronger PET preform for the enclosure. These have very thick walls because they are the blanks that are thermally blow-molded to create soda bottles. You will need to find ones larger than the standard 2L bottle preforms which have an internal diameter of only 21mm. This is just a bit too tight for the RTC module. The 30ml centrifuge tubes shown above have an internal diameter of 26-27mm.
Testing Your New Logger
Make at least two machines at a time. I usually build in batches of six, and for every dozen at least one usually ends up with some kind of issue, like an RTC temp. register outside the ±3°C spec, or a ProMini with one of those fake 328p processors that draws too much sleep current. Having more than one logger makes it easy to identify when you’ve got a hardware problem rather than an error in your code. Even then, no unit is worth more than an hour of troubleshooting when you can build another one in less time. Seriously! The part cost on these things is well below $10, which is often less than you’d pay just to replace the battery on other loggers. This also applies to maintenance: just run till it fails and then replace it. In our experience, most inexpensive sensors have a reliable lifespan of less than two years in the field.
A good general approach to testing any DIY build is to check them with a doubling schedule: Start with rapid UART tethered tests via the serial monitor at second intervals, then initial stand-alone tests for 1,2,4 & 8 hours till you run overnight, followed by downloads after 1day, 2days, 4days, 8days, etc. For those initial burn-in tests, set the interval short enough that the entire memory gets filled. Valid sampling intervals must divide evenly into 60 and second-intervals can be set for rapid testing if you first enter 0 for the minutes.
Occasionally you get an RTC module with a really weak contact spring and this generally shows up as battery readings that jump all over the place, or even as unexpected quits from accidental bumps disconnecting power. A small bit of double sided foam mounting tape behind the spring will usually make the battery connection quite robust.
The shape of the battery burn-down curve during your pre-deployment testing is an excellent predictor of reliability! But to use that information you need to be running several identical machines at the same time, and to start those runs with fresh batteries. I use the cheapest batteries I can get for these tests, knowing the better quality batteries I use on deployment will last much longer.
Remember that eBay/Ali/Amazon sensor modules are cheap for a reason, and it’s not unusual to see 20% of them rejected for strange behavior or infant mortality. So huddle-test each batch to normalize them. The relative accuracy spec for the BMP280 is supposed to be ±0.12 millibar, but when I run a batch of them side-by-side I usually see ±4 millibar between the records. Cheap BMEs sometimes refuse to operate with their individual RH/T/Pr sensors set at different oversampling levels, and at the highest resolution (16x oversampling) that sensor may draw more than your power budget can sustain over a long deployment. Real-world installation inevitably exposes the logger to condensing conditions. Sensors with metal covers (like the BMP/E series) will experience internal condensation at the dew point. Moisture creep is the largest cause of data loss on the project after theft/vandalism. So cleaning leftover flux from all parts with cotton swabs + 90% isopropyl alcohol before & after assembly is always worth your time. So is conformal coating, and you can use clear nail polish for that if silicone coatings are hard to find.
And all the other usual caveats about vendors apply: split your part orders over multiple suppliers with different quantities, ordered on different days, so you can isolate the source of a bad shipment and/or identify suppliers that are OK. Don’t be surprised if that batch of sensor boards you ordered transmogrifies into a random delivery of baby shoes. Amazon is often cheaper than eBay, and AliExpress is 1/4 the price of both. Trusted suppliers increase part costs by an order of magnitude, but that may still be worth it if you don’t have time for enough test runs to eliminate the duds.
Power Optimization on this Data Logger:
A (relatively high) average sleep current of ~5µA * 86400 sec/day would use ~432 milliamp-seconds/day from a Cr2032 that can provide roughly 360,000 mAs of power [100mAh] on its main voltage plateau. Any power saving strategy must be weighed against this daily amount to determine if the complexity it adds to your code will deliver a proportional increase in operating time. I rarely see a sensor reading use more than 1 milliamp-second of power – even with relatively high drain sensors like the BME280. So most of the power used by this logger is due to the DS3231 RTC’s 3µA timekeeping current, which cannot be changed. However, battery voltage droop during peak current events is usually what triggers the low voltage shutdown, and that is affected by code execution.
8MHz ProMini boards draw about 3.5mA when running at 3v. Slow functions like digitalWrite() and pinMode() are replaced with much faster port commands wherever power and/or timing are critical. Pin states, peripheral shutdowns (power_all_disable(); saves ~0.3mA) and short sleeps are used throughout for battery voltage recovery. Waking the 328p from those powerdown sleeps takes 16,000 clock cycles (~2 milliseconds @8MHz, +60µS if BOD_OFF), but the ProMini only draws ~300µA while waiting for the oscillator to stabilize. These wakeups only use about 1mAs/day.
The original code released in 2022 used CLKPR to bring the ProMini down to 1MHz (lowering runtime current from 3.5mA to ~1.3mA); however, testing later revealed that the total energy cost per logging event actually increased slightly when the system clock was divided. In addition, I came across several EEproms that would freeze if I lowered the system clock to 1MHz during a save. So I have removed the CLKPR calls to make the codebase more portable. I also found that the oscillator startup time gets multiplied by the CLKPR divider. This might be the only documentation of that on the web, so I’m leaving the information here – even though CLKPR is no longer relevant to the logger:
( Note: For the following images a Current Ranger was used to convert µA to mV during a reading of the RTC’s temperature register at 1MHz. So 1mV on these oscilloscope screen shots = 1µA is being drawn from the Cr2032 )
Here CLKPR restores the CPU to 8MHz just before entering powerdown sleep, and then slows the processor to 1MHz after waking. The extra height of that first spike is due to the pullup resistor on SQW. Cutting the trace to that resistor and using an internal pull-up reduces wake current by 750µA.
Here the 328p was left CLKPR’d down to 1MHz when it entered powerdown sleep(s). Waking the processor now takes 16 milliseconds – wasting a significant amount of power through the 4k7 pullup on SQW while the RTC alarm is still asserted.
Using the 328p’s internal oscillator to save power is a non-starter because its 10% error borks your UART to the point that it can’t upload code. Our ICU based timing method also needs the stability of the external oscillator.
That bridge between the coin cell and VCC means UART connection time probably shortens battery lifespan a bit. Panasonic specifies that “the total charging amount of the battery during its usage period must be kept within 3% of the nominal capacity of the battery”, so it’s a good idea to remove the coin cell if you are spending an extended time on serial. But given our tight operational margin we can’t afford to lose 200mv over a Schottky protection diode. A typical solution would address this by ORing the two supplies with an ideal diode circuit, but that’s not an option here as ideal diodes usually waste some 10-20µA. On a practical level it’s easier to just pop in a fresh battery before every long deployment.
EEprom & sensor additions usually push directly measured continuous sleep currents to 2µA (so ~5µA average when you add the RTC temp conversions), but that still gives a >1 year estimate on 110mAh. With all due respect to Ganssle et al, the debate about whether buffering caps should be used to extend operating time is something of a MacGuffin, because leakage is far less important when you only have enough memory space for one year of sensor readings. Even a whopper 6.3v 1000µF tantalum only increases sleep current by ~1µA. That’s 1µA * 24h * 365 days, or about 10 mAh/year, in trade for keeping the system well above the 2.8v cutoff. That means we don’t need to lower the BOD with fuse settings & custom bootloaders. When you only service your loggers once a year, any tweaks that require you to remember ‘special procedures’ in the field are things you’ll probably regret.
Capacitor leakage scales linearly, so use the Falstad simulator to see what size of rail buffer you actually need. Capacitors rated 10x higher than the applied voltage reduce leakage currents by a factor of 50, so your buffering caps should be rated to 30v if you can find them. The 220µF/25v 227E caps I tested only add ~15nA to the logger’s sleep current, and these can be obtained for <50¢ each (440uF 10v caps leak around 25nA). High voltage ratings get you close to the leakage values you’d see with more expensive Polypropylene, Polystyrene or Teflon film caps, and move you away from any de-rating issues. The one proviso is that as the buffering cap gets larger you will need to add more ‘recovery time’ in the code before the rail voltage is restored after each code execution block. Sleeping for 30msec after every I2C transaction is a safe starting point, but you’ll need a scope to really tune those sleeps for large sensor loads like you see with a BME280 at 16x oversampling. If moisture condenses inside the housing on deployment and the logger mysteriously climbs from 1-2µA sleep current to something higher, replacing the tantalum rail buffering cap is one of my first diagnostic steps.
In the next three images, a Current Ranger converts every 1µA drawn by the logger to 1mV for display on the ‘scope. The last two spikes are transfers of 16 bytes into the 4K EEprom on the RTC module while the CPU takes ADC readings of the rail voltage. Note that our current code saves readings as a single event at the end of each pass through the main loop, but I forced multiple large saves for this test to show the effect of repeated pulse-loads:
A triple event: a temperature sensor reading followed by the transfer of two array buffers to EEprom. Battery current with no rail buffering cap. [Vertical scale: 500µA/division, Horizontal: 25ms/div]
Here a 220µF tantalum capacitor was used to reduce the peak battery currents from 2.5mA to 1.5mA for that same event.
Here a 1000µF tantalum [108J] capacitor reduces the peak battery current to 1mA. The 30msec sleep recovery times used here are not quite long enough for the larger capacitor.
Voltage across a coin cell that’s been running for two months with NO buffering capacitor. The trace shows the 2.5mA loads causing a 60mv drop; implying the cell has ~24 ohms internal resistance. [Vertical Scale: 20mv/div, Horizontal: 25ms/div]
The minimal RTC-only sensor configuration reached a very brief battery current peak of ~2.7mA with no buffering cap, 1.5mA with 220µF, and less than 1mA with 1000µF. The amount of voltage drop these currents create depends on the coin cell’s internal resistance, but a typical unbuffered unit usually sees 15-30mV drops when the battery is new, growing to ~200mV on old coin cells. The actual voltage drop also depends on time, with subsequent current spikes having more effect than the first as the internal reserve gets depleted. The following images display the voltage droop on a very old coin cell pulled from a logger that’s been in service since 2016 (@3µA average RTC backup):
This very old coin cell experiences a large 250mv droop with no capacitor buffer. Note how the initial short spike at wakeup does not last long enough to cause the expected drop. [Vertical: 50mv/div, Horizontal: 25ms/div]
Adding a 220µF/25v tantalum capacitor cuts that in half but triples the recovery time. CR2032s plateau near 3.0v for most of their operating life, so the drop starts from there. [Vertical: 50mv/div, Horizontal: now 50ms/div]
A 1000µF/6.3v tantalum added to that same machine limits droop to only 60mv. Recharging the capacitor after the save now approaches 200 milliseconds. [Vertical : 50mv/div, Horizontal: 50ms/div]
According to Nordic Semi: “A short pulse of peak current, say 7mA for 2 milliseconds followed by an idle period of 25ms is well within the limit of a Cr2032 battery to get the best possible use of its capacity.” After many tests like those above, our optimal ‘peak shaving’ solution is to run the processor at 8MHz, breaking up the execution time with multiple 15-30 millisecond POWER_DOWN sleeps before the CR2032 voltage has time to fall very far (especially necessary if you start doing a lot of long integer or float calculations). This has the benefit that successive sensor readings start from similar initial voltages, but those extra sleeps can easily stretch the duration of a logging event out toward 300 milliseconds – putting limits on the logger’s maximum sampling rate:
Current drawn in short bursts of 8MHz operation during sensor readings. The final EEprom save peaks at ~2.75mA draw (in this old example with CLKPR 1MHz CPU which we no longer do) [CH2: H.scale: 25msec/div, V.scale 500µA/div converted via Current Ranger]
Voltage droop on that same ‘old’ CR2032 used above reached a maximum of 175mv with NO buffering capacitor across the rail. This battery has about 64 ohms of internal resistance. [CH2: V.scale 25mv/div, H.scale 25ms]
Adding a 220µF tantalum capacitor to the rail holds that old battery to only 50mv droop. The 25v tantalum cap adds only 0.1µA leakage to the overall sleep current. [CH2: V.scale 25mv/div, H.scale 25ms]
EEprom save events are typically around 3.5mA for 6ms. Without a rail buffer a new coin cell will fall about 100mv. With a 220µF rail buffering cap supplying the initial demand, the peak current drawn from the coin cell is less than 1.5mA – which limits the overall voltage droop to less than 50mv. Even with very old batteries a typical EEsave event doesn’t usually drop the rail more than 150mv with a rail buffer cap; however, the recovery time grows significantly with battery age – from less than 25 msec when new to more than 150 milliseconds for a full recovery. So old-battery logging events look more like ‘blocks’ on the oscilloscope trace rather than the series of short spikes shown above.
This ‘solder-free’ AT24c256 DIP-8 carrier module is bulky, but it lets you easily set multiple I2C addresses. Here I’ve removed the redundant power LED & pullup resistors. Heliosoph posted a way to combine multiple EEproms into a single linear address range.
Even with fierce memory limitations, we only use the 328p’s internal 1k EEprom for startup index values and text strings that get written while still tethered to the UART for power. EEprom.put blocks the CPU for 3.3msec per byte after the first, and internal EEprom writing adds an additional 8mA to the ProMini’s normal 3mA draw. This exceeds the recommended 10mA max for a garden variety Cr2032. Multi-byte page writes aren’t possible, so data saved into the 328p costs far more power than the same amount saved to an external EEprom. However, it is worth noting that reading from the internal EEprom takes the same four clock ticks as an external one with no power penalty, while PROGMEM takes three and RAM takes two clock cycles. So it doesn’t matter to your runtime power budget where you put constants or even large lookup tables.
A simple optimization we haven’t done with the code posted on GitHub is to buffer data into arrays first, and then send that accumulated data with larger wire library buffers. All AT-series EEproms can handle the 4k’s 32-byte page-write but the default wire library limits you to sending only 30 bytes per exchange because you lose two bytes for the register location. So to store sensor readings in 32-byte buffer arrays and transfer those you need to increase the wire library buffers to 34 bytes. This has to be done by manually editing the library files:
In wire.h (@ \Arduino\hardware\arduino\avr\libraries\Wire\src):
#define BUFFER_LENGTH 34
and in twi.h (@ \Arduino\hardware\arduino\avr\libraries\Wire\src\utility):
#define TWI_BUFFER_LENGTH 34
That twi buffer gets replicated in three places, so the wire library will then require proportionally more variable memory at compile time. With larger EEproms you could raise those buffers to 66 bytes for 64 data-byte transfers. It’s also worth mentioning that there are alternate I2C libraries out there (like the one from DSS) that don’t suffer from the default wire library limitations. AT series EEproms always erase & rewrite an entire page block no matter how many bytes are sent, so increasing the number of bytes sent per save event reduces wear and can save significant amounts of power. In my tests, newer larger EEproms tend to use about the same power as smaller older EEproms for identical save events because, even though they are re-writing larger blocks, they do those internal operations much faster. So a ‘typical’ EEprom write event uses somewhere between 0.3 to 0.5 milliamp-seconds of power no matter how many bytes you are saving. If your daily sleep-current burn is about 300 milliamp-seconds, then it takes a few hundred of those EEprom save events to use the same amount of power. Increasing the transfer payload (with temporary accumulation arrays) from 16 bytes to 64 bytes cuts EEsave power use by 75%. That can extend the logger’s operating life with short sampling intervals of 1-5 minutes, or where your sensors generate many bytes per record. Despite several technical references saying otherwise, I saw no significant difference in save duration or power (on the oscilloscope) with EEprom locations prewritten to all zeros or 0xFF before the data save events. One thing that does make an enormous difference is transferring blocks that exactly match the EEprom’s hardware page size – if you get the alignment perfect then the EEprom can write the new information without all the preload & insertion operations you see when saving smaller amounts of data.
This cuts both the time and the power for EEprom saving by 50% if you have enough memory for all the pre-buffering that requires.
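A sketch of the alignment condition described above: a burst write only avoids the EEprom’s internal preload/insertion work when it starts on a page boundary and fills whole pages. The helper name is mine; page size is 32 bytes on the AT24c32, while many larger AT chips use 64-byte pages:

```cpp
#include <stdint.h>

// Page-alignment check for AT-series EEprom burst writes: the fast path
// requires the write to begin on a page boundary and span whole pages.
// (Illustrative helper, not part of the published logger code.)
bool writeIsPageAligned(uint32_t startAddr, uint16_t nBytes, uint16_t pageSize) {
  return (startAddr % pageSize == 0) && (nBytes % pageSize == 0);
}
```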
Because of the code complexity, we have not implemented array buffering in the current code build, so that the codebase stays understandable for beginners. Every pass through the main loop saves data to the external EEprom, and these loggers still have an excellent operating lifespan without buffering. For many EEprom types, a partial write of fewer bytes than the hardware page size refreshes the rest of the page along with the new data being written. This forces the entire page to endure a write cycle, so each memory location in the EEprom may actually get re-written [EEprom hardware page size / bytes per save] times, which for the 4k would typically be 32/4 = 8 times per run. EEproms have a ‘soft’ wear limit of about 1,000,000 write cycles, so even in that worst-case scenario the logger could fill that chip 125,000 times before wearing the EEprom out. But buffering can make the EEprom last longer, or extend operating life in ways other than the battery power saved.
FRAM takes about 1/50th as much power to write data compared to standard EEproms, but those expensive chips often sleep around 30µA, so they aren’t a great option for low-power systems like this logger unless you pin-power them so you can disconnect them during sleep. FRAM can endure more than 10 billion 50-nanosecond write cycles, making it better suited for applications where rapid burst-sampling is required. The I2C bus is not really fast enough to take advantage of FRAM’s performance, but with the SD card removed from the logger the four SPI bus connections are now available. Once your code is optimized, the majority of the logger’s runtime power is consumed by the 328p burning 3.5mA while it waits around for the relatively slow I2C bus transactions – even with the bus running at 400kHz.
Here wires extend connections for the thermistor & LED to locations on the surface of the housing.
No matter what optimizations you make, battery life in the real world can also be shortened by thermal cycling, corrosion from moisture ingress, being chewed on by an angry dog, etc. And you still want the occasional high drain event to knock the passivation layer off the battery.
An important topic for a later post is data compression. Squashing low-rez readings into only one byte (like we do in the base code with the RTC temperature & battery voltage) is easy; especially if you subtract a fixed offset from the data first. But doing that trick with high range thermistor or lux readings is more of a challenge. Do you use ‘Frame of Reference’ deltas, or XOR’d mini-floats? We can’t afford much power for heavy calculations on a 328p so I’m still looking for an elegant solution.
Some Run Test Results
Since we covered adding BM & BH sensors, here are a couple of burn-down curves for the two configurations described above. Both were saving 4 bytes of data per record every 30 minutes, giving a runtime storage capacity of about 150 days. In this test, battery voltage was logged each time 16-byte buffer arrays were written to a 32k EEprom. Both loggers have a measured sleep current of ~1.5µA and they were downloaded periodically. Although the curve spikes up after each download, these runs used the same coin cell battery throughout:
Cr2032 voltage after 11 months @30min sampling interval: BMP280 sensor reading Temp. & Pr. stored in 32k eeprom with NO 220µF rail buffering capacitor. This test run is complete. At x16 oversampling the BMP uses considerably more power than the BH1750.
Coin cell after more than 12 months @30min sampling interval: BH1750 sensor & 32k ‘red board’ EEprom (Sony brand battery: again, with no rail buffer cap). Both of these records show only the lowest battery reading in a given day.
I ran these tests without a rail buffering cap to see the ‘worst case’ lifespan. A pulse-loaded Cr2032 has an internal resistance of ~20-30Ω for about 100 mAh of its operational life, so our 3.5mA EEprom-writing event should only drop the rail ~100mv with no rail buffer cap. But once the cell IR approaches 40Ω we will see drops reaching 200mv for those events. The Cr2032s shown above have plateaued near their nominal 3.0v, so we will see the rail droop to ~2800mv when the batteries age past the plateau. Again, our tests show that with a 220µF rail capacitor those drops would be reduced to less than 50mv, and with 1000µF the battery droop is virtually eliminated.
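The droop figures above are just Ohm’s law applied to the cell’s rising internal resistance; a toy helper makes the arithmetic explicit (the load and IR values are the nominal ones quoted above):

```cpp
// Expected rail droop (mV) for a pulse load on a coin cell: V = I * R.
// Internal resistance climbs as the cell ages, so the same 3.5 mA
// EEprom-write pulse produces a bigger droop late in the cell's life.
int droop_mV(float load_mA, float cellIR_ohms) {
    return (int)(load_mA * cellIR_ohms);
}
// mid-life cell: droop_mV(3.5, 30) -> 105 mV
// aging cell:    droop_mV(3.5, 40) -> 140 mV
```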
Note that the UART download process briefly restores the voltage because the 3.3v UART adapter drives a small reverse current through the cell. I think this removes some of the internal passivation layer, but that voltage restoration is short lived. On future tests I will enable both logCurrentBattery (after wake) and logLowestBattery (during EEwrite) to see if the delta between them matches the drops I see with a scope.
And here we compare our typical logging events to the current draw during a DS3231-SN RTC’s internal temperature conversion (with a 220µF/25v cap buffering the rail). The datasheet spec for the DS3231 temp conversion is 125-200ms at up to 600µA, but the units I tested draw half that at 3.3v. On all three of these images the horizontal division is 50 milliseconds, and vertical is 200µA via translation with a Current Ranger:
Typical sampling event peaks at 450µA with a 220µF rail buffer cap. This logger slept for 15msec of battery recovery after every sensor reading or I2C exchange.
Every 64 seconds a DS3231-SN temperature conversion draws between 200 to 300µA for ~150ms. There is no way to change the timing of the RTC conversions.
Occasionally the RTC temp conversion starts in the middle of a logging event, adding that current to the peaks.
The rail cap can’t protect the coin cell from the -SN’s long-duration load, so temp conversions overlapping the EEprom save may be the trigger for most low-voltage shutdowns during deployment. The best we can do to avoid these collisions is to check the BSY bit (bit 2) of the DS3231 Status Register (0Fh) and delay the save until that bit clears. But even with that check, sooner or later a temperature conversion will start in the middle of an EEprom save event. These ‘collisions’ may be more frequent with the -M variants of the chip, which do temperature conversions every 10 seconds when powered from Vbat, although they only take 10msec per conversion instead of the -SN’s 150msec. Seeing those conversions on an oscilloscope is one way to verify which kind of RTC you’ve got, with so many -SN modules out there today being relabelled -M chips:
DS3231-M Temp. conversion: At 3v, this unit drew 230µA for 10 milliseconds, but this occurs every 10 seconds. (1mV on scope = 1µA via Current Ranger)
DS3231-SN Temperature Conversion: At 3v, this chip drew 280µA for 130 milliseconds, every 64 seconds. (1mV=1µA via C.R.)
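The BSY check described above boils down to testing bit 2 of the status register before starting a save. Here is a sketch of that idea – the I2C read shown in the comments assumes the standard Wire library and is not taken verbatim from our logger code:

```cpp
#include <cstdint>

const uint8_t DS3231_STATUS_REG = 0x0F;

// Bit 2 (BSY) of the DS3231 status register (0Fh) is set while a
// temperature conversion / TCXO adjustment is in progress.
bool ds3231Busy(uint8_t statusByte) { return (statusByte & 0x04) != 0; }

// On the logger you would fetch the register over I2C first, roughly:
//   Wire.beginTransmission(0x68); Wire.write(DS3231_STATUS_REG);
//   Wire.endTransmission(); Wire.requestFrom(0x68, 1);
//   while (ds3231Busy(Wire.read())) { /* brief sleep, then re-read */ }
//   saveBufferToEEprom();   // hypothetical save function
```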
Given that the average timekeeping current is the same for both chips, we try to use ±2ppm -SNs for longer deployments instead of the ±5ppm -Ms. In real-world terms, ±1ppm is equivalent to about 2.6 seconds of drift per month, and that’s what we see on most -SN RTCs. I’ve also seen occasional forum comments about DS3231M oscillators stopping spontaneously during field deployment. Note that on several of the RTC modules the SQW alarms continue to be asserted even after you disable them in the control register (by setting the A1IE and A2IE alarm-interrupt-enable bits to zero), and this draws 600-700µA continuously through the pullup on the module. The only way to be absolutely sure the RTC alarm will not fire after a logger shutdown is to turn off the RTC’s main oscillator. We do this in the code’s shutdown function, because you can simply reset the time via the start menu before the next run. When you remove the coin cell, DS3231 register contents are lost – usually becoming zeros when power is restored, although the datasheet says they are ‘undefined’ at powerup. The RTC oscillator is initially off until the first I2C access.
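Stopping the oscillator at shutdown is a one-bit change: per the datasheet, setting EOSC (bit 7) of the control register at 0Eh halts the DS3231 oscillator while the chip runs from Vbat, which also guarantees no alarm can assert SQW afterwards. A sketch of the register math only – the I2C read/write plumbing is omitted:

```cpp
#include <cstdint>

const uint8_t DS3231_CONTROL_REG = 0x0E;

// EOSC (bit 7, 0Eh) is active-low: 0 = oscillator runs, 1 = oscillator
// stops when the DS3231 is powered from Vbat. Clearing only A1IE/A2IE
// (bits 0/1) was not enough to silence SQW on some of our modules.
uint8_t stopOscillator(uint8_t ctrl)  { return (uint8_t)(ctrl | 0x80); }
uint8_t startOscillator(uint8_t ctrl) { return (uint8_t)(ctrl & ~0x80); }
```

Read the control register, pass it through `stopOscillator()`, and write it back as the last step of the shutdown routine; the start menu can clear the bit again (and reset the time) before the next run.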
If your code hangs during execution, the processor will draw 3.5mA continuously until the battery drains and the logger goes into a BOD restart loop with the D13 red led flashing quickly. The logger will stay in that BOD loop from 4-12 hours until the battery falls below 2.7v without recovering. This has happened many times in development with no damage to the logger or to any data in the EEprom.
Most of the units I’ve tested trigger their BOD just below 2.77 volts. And 10 to 20 millivolts before the BOD triggers, the internal voltage reference goes a bit wonky, reporting higher voltages than actual if you are using the 1.1vref trick to read the rail. The spring contact in the RTC module can be weak, which can trigger random shutdowns from large voltage drops, so I usually slide a piece of heat-shrink behind it to strengthen contact with the flat surface of the coin cell. The rail capacitor protects the unit from most impacts that might briefly disconnect the spring contact under the coin cell. However, hard knocks are such a common problem during fieldwork that we use a drop of hot glue to lock the RTC coin cell in place before deployment. Normal operation will see 40-50mv drops during EEprom saves with rail buffers up to 200µF. If those events look unusually large, or rail voltage recovery starts stretching to 100’s of milliseconds on the scope, you probably have poor battery contact. Even with good contact, long-duration loads can deplete the rail buffering cap, so a 200µF cap reaches the same v-drop as a ‘naked’ battery after ~8-10msec, and 1000µF after ~15-20msec. In all cases, your first suspect when you see weird behavior is that the coin cell needs to be replaced.
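For reference, the ‘1.1vref trick’ mentioned above reads the internal bandgap against the rail and inverts the ratio: Vcc(mV) ≈ Vbandgap × 1023 / ADC reading. The math is simple; the catch is that the bandgap is only nominally 1.1v (the 328p datasheet allows 1.0-1.2v), which is part of why readings go wonky near the BOD. A sketch of the conversion step only, assuming a per-board calibrated bandgap constant:

```cpp
// Back-calculate the rail voltage from an ADC reading of the internal
// bandgap taken against Vcc. bandgap_mV is nominally 1100 but varies
// chip-to-chip, so each board really needs its own calibration constant.
long railFromBandgap(int adcReading, long bandgap_mV = 1100) {
    return (bandgap_mV * 1023L) / adcReading;
}
// e.g. an ADC reading of 341 implies a ~3300 mV rail: 1100*1023/341 = 3300
```

The ADC mux configuration that actually produces `adcReading` is hardware-specific and omitted here.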
Another thing to watch out for is that with sleep currents in the 1-2µA range, it takes a minute to run down even the little 4.7µF cap on the ProMini boards. If you have a larger capacitor buffering the rail the logger can run for more than 10 minutes after the battery is removed.
More Cr2032 Battery Testing
16x accelerated battery tests averaged about 1250 hours run time before hitting the BOD.
We ran a series of Cr2032 battery tests with these little loggers and were pleasantly surprised to find that, even with the default BOD limiting us to the upper plateau of those lithium cells, we can still expect about two years of run time from most name-brand batteries with a 200-400µF rail cap. Also keep in mind that all the units in the battery test had BODs below 2.8v – about 1 in 50 of the ProMinis will have a high BOD at the maximum 2.9v value in the datasheet. It’s worth doing a burn test with the crappy HuaDao batteries to spot these high-cutoff units quickly so you can exclude them from deployment. We increased the sleep current for the accelerated test by leaving the LEDs on during sleep; with a series of different resistors on the digital pins, this logger might also be the cheapest way to simulate complex duty cycles for other devices.
Addendum: Build video (w EEprom Upgrade)
We finally released a full build tutorial on YouTube – including how to upgrade the default 4k EEprom with two stacked 64k chips:
Released the classroom version of this 2-module logger, with substantial code simplifications that make it easier to add new sensors, and the 2-Module code build has been updated to match. This new variant has two breadboards supported on 3D-printed rails so that sensor connections can quickly be changed from one lab activity to the next. The default code reads temperature via the RTC and an NTC thermistor, light via an LDR and the BH1750, and pressure via a BMP280. It also has support for a PIR sensor and a mini-OLED display screen.
6-pin CP2102 UARTs are cheap, with good driver support, but you have to make your own crossover cable.
Macintosh users have been running into a very specific problem with this logger: their USB-C to USB-A adapter cables are smart devices with chips inside that will auto shut-down if you unplug them from the computer while they are connected to a battery-powered logger. The VCC & GND header pins on the logger feed enough power/voltage back through the wires to make the chip in the dongle go into some kind of error state – after which it does not re-establish connection to the Mac properly until the adapter is completely de-powered. So you must unplug your loggers at the UART-module-to-logger connection FIRST, instead of simply pulling the whole string of still-attached devices out of the USB-C port.
Last Word:
“If you need one, then you need two. And if you need two, you better have three.” The benefit of loggers this easy to produce is that you can dedicate one to each sensor, since the sensors often cost more than the rest of the unit combined. Then you can deploy duplicates to capture long time-series. This gives you redundancy in case of failure and makes it easier to spot when sensors start to drift. Deploying at least two loggers to every site also lets you use a trick from the old days when even the expensive commercial loggers didn’t have enough memory to capture an annual cycle: Set each logger to sample at twice the interval you actually want, and then stagger the readings (or set one of the logger clocks late by 1/2 of that interval). This way both loggers operate long enough to capture the entire dataset, and you can weave the readings from the two machines back together to get the higher sampling interval you originally wanted but did not have enough memory for. If one of the loggers fails you still get the complete season, but at the longer interval.
RH% gain (Y axis) over time (X axis) in submerged 30mL housings: The upper purple lines were controls with no desiccant added to the logger, the orange curve had a 0.5 gram packet, and the lowest blue curve had two 0.5 gram packets of small desiccant beads. So 1 to 1.5 grams is adequate for a typical one-year deployment, with about a 1 to 1.5% rise in RH% per month due to the vapour permeability of the centrifuge tubes. This test was done at typical room temps, but the rate increase near the end of the test was due to an 8°C rise – so the diffusion rate is temperature dependent. BME280 sensors were used.
Dedicated loggers also provide the non-obvious benefit of reducing the potential for interference between sensors. Cross-talk is particularly common with water quality sensors because the water itself can form a circuit between them. It is nearly impossible to predict this kind of problem if all you did was benchtop calibration of the isolated sensors before your deployment. And even if your base code is robust, and you don’t have any weird ground-loops, it’s not unusual for sensor libraries to conflict with each other in a multi-sensor build.
Hopefully this new member of the Cave Pearl logger family goes some way toward explaining why we haven’t moved to a custom PCB: Using off-the-shelf modules that have global availability is critical to helping other researchers build on our work. And when you can build a logger in about 30 minutes, from the cheapest parts on eBay that still runs for a year on a coin cell – why bother? Bespoke PCBs are just another barrier to local fabrication, with potential for lengthy delays at customs to increase what may already be unpredictable shipping and import costs.
We’ve been having fun embedding these ‘ProMini-llennium Falcons’ into rain gauges and other equipment that predate the digital era. There’s a ton of old field kit like that collecting dust in the corner these days that’s still functional, but lacks any logging capability. Much of that older equipment was retired simply because the manufacturer stopped updating the software/drivers. While IoT visualization apps are all the rage in hobbyist electronics, they may end up creating similar dependencies that open source projects aiming for longevity should avoid. Not to mention the fact that those wireless packet transfers require a power budget orders of magnitude larger than the rest of the logger, while relying on back-end infrastructure that doesn’t exist in the parts of the world where more environmental monitoring is desperately needed.