
How to Normalize a Set of Pressure Sensors

Once your project starts to grow it’s common to have multiple different sensors, from different vendors, measuring the same environmental parameter. Ideally, those sensors would produce the same readings in the same environment – but in practice there are significant offsets. Datasheets for the MS5837-02BA and MS5803-14BA sensors that we will compare in this post claim an accuracy of ±0.5 mbar and ±2ºC for the 2-bar sensor, while the 14-bar sensors are only rated to ±20 mbar and ±2ºC. Sensors from Measurement Specialties are directly code compatible, so the units here were read with the same Over Sampling settings.

Barometric pressure from a set of nine MS58xx pressure sensors running on a bookshelf as part of normal burn-in testing. The main cluster has a spread of about 10 millibar, with one dramatic outlier >20 mbar from the group. These offsets are much wider than the datasheet spec for those 2-bar sensors.

But this is only a starting point: manufacturers have very specific rules about things like the temperature ramps during reflow and it’s unlikely that cheap sensor modules get handled that carefully. Housing installation adds both physical stress and thermal mass which will induce shifts; as can the quality of your supply voltage. Signal conditioning and oversampling options usually improve accuracy, but there are notable exceptions like the BMP280 which suffers from self-heating if you run it at the startup defaults.

As described in our post on waterproofing electronics, we mount pressure sensors under mineral oil with a nitrile finger cot membrane.

Sensors like NTC thermistors are relatively easy to calibrate using physical constants. But finding that kind of high quality benchmark for barometric sensors is challenging if you don’t live near a government-run climate station. So we typically use a normalization process to bring a set of different sensors into close agreement with each other. This is a standard procedure for field scientists, but it’s hard to find because the word ‘normalization’ means different things in various industry settings. In Arduino maker forums it usually describes scaling the axes from a single accelerometer with (sensor – sensor.min )/( sensor.max – sensor.min ) rather than standardizing a group of sensors.

When calibrating to a good reference you generally assume that all the error is in your cheap DIY sensor, and then do a linear regression by calculating a best-fit line with the trusted data on the Y axis of a scatter plot. However, even in the absence of an established benchmark you can use the same procedure with a ‘synthetic’ reference created by drawing an average from a group of sensors:

Note: Sensor #41 was the dramatic outlier more than 20 millibar from the group (indicating a potential hardware fault) so its data is not included in our initial group average.

With that average you calculate y = Mx + B correction constants using the spreadsheet’s SLOPE & INTERCEPT functions. This lets you copy/paste equations from one data set to the next, which dramatically speeds up the process when you are working through several sensors at a time. It also recalculates those constants dynamically when you add or delete information:
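If you are doing this outside of a spreadsheet, SLOPE and INTERCEPT are just the ordinary least-squares estimates. With x as the raw readings from one sensor and y as the group-average reference, the constants and the correction are:

M = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}, \qquad B = \bar{y} - M\,\bar{x}, \qquad y_{corrected} = M \cdot x_{raw} + B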

The next step is to calculate the difference between the raw sensor data and the average, both before and after these y = Mx + B corrections have been applied to the original pressure readings. These differences between the group average and an individual sensor should be dramatically reduced by the adjustment:

After you copy/paste these calculations to each sensor, create x/y scatter plots of the residuals so you can examine them:

While the errors are now centered around zero, these graphs indicate that we are not quite finished. In the ideal case, residuals are soft fuzzy distributions with no observable patterns. But here we have a zigzag that is showing up in all the sensors. This is an indication that one (or more) of our sensors has an issue. Scrolling further along the columns identifies the offending sensors, with nasty-looking residual plots even after the corrections have been applied:

Sensor #41 (far right) was already rejected from the general average because of its enormous offset, but the high amplitude residual plots indicate that the data from #45 and #42 are also suspect. If we eliminate those two from the average the zigzag pattern virtually disappears from the rest of the sensors in the set:

There’s more we could learn from the residual distributions, but here we’ve simply used them to prune our reference data, preventing bad sensor input from harming the average we use for our normalization.

And what do the sensor plots look like after the magic sauce is applied?

The same set of barometric pressure sensors, before and after normalization corrections. (minus #41 which could not be corrected)

It’s important to note that there is no guarantee that fitting your sensors to an average will do anything to improve accuracy. However, sensors purchased from different vendors, at different times, tend to have randomly distributed offsets. In those cases normalization improves both precision and accuracy, but the only way to know if that has happened is to validate against some external reference like the weather station at your local airport. There are several good long term aggregators that harvest METAR data from these stations like this one at Iowa State, or you can get the most recent week of data by searching for your local airport code at weather.gov

METAR is a weather reporting format used predominantly by pilots and meteorologists, and those stations report pressure adjusted to ‘Mean Sea Level’. So you will have to adjust the MSL data before you can compare it to the pressure reported by your sensors. You will also need to know the exact altitude of your sensors when the data was gathered to remove the height offset between your location and the airport station.

Technically speaking, you could calibrate your pressure sensors directly to those official sources. However, there are a lot of Beginner, Intermediate and Advanced details to take care of. Even then you still have to be close enough that both locations are in the same weather system.
Here I’m just going to use the relatively crude adjustment equation:
Station Pressure = SLP – (elevation/9.2) and millibar = inchHg x 33.8639 to see if we are in the ballpark.
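As a hedged sketch of that ballpark conversion (assuming the SLP is reported in inches of mercury and the elevation difference is in metres, which is how I read the crude equation above – the function name and units are my assumptions):

// rough conversion from METAR sea-level pressure to station pressure,
// using the crude adjustment above (names & units are illustrative)
float stationPressure_mbar(float SLP_inchHg, float elevation_m) {
  float SLP_mbar = SLP_inchHg * 33.8639;    // inches of mercury -> millibar
  return SLP_mbar - (elevation_m / 9.2);    // remove the height offset (approximate)
}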

Barometric data from the local airport (16 miles away) overlaid on our normalized pressure sensors. It’s worth noting that the airport data arrives at strange odd-minute intervals, with frequent dropouts which could complicate a real calibration.

Like most pressure sensors, an MS58xx also records temperature because it needs that for its internal calculations. So we can repeat the entire process with the temperature readings from this set:

Temperatures °C from a set of MS58xx Pressure sensors: before & after group normalization. Unlike pressure, this entire band was within the ±2ºC specified in the datasheet.

These sensors were sitting pretty far back on a bookshelf that was partly enclosed, so some of them were quite sheltered while others were exposed to direct airflow. So I’m not bothered by the spikes or the corresponding blips in those residual plots. I’m confident that if I had run this test inside a thermally controlled environment (ie: a styrofoam cooler with a small hole in the top) the temperature residuals would have been well behaved and smooth.

One of the loggers in this set had a calibrated NTC thermistor onboard. While this sensor had significant lag because it was located inside the housing, we can still use it to check if the normalized temperatures benefit from the same random distribution of errors that were corrected so nicely by the pressure normalization:

Once again, we have good alignment between a trusted reference and our normalized sensors.

Comments:

Normalization is a relatively low-effort way to improve sets of sensors – and it’s vital if you are monitoring systems that are driven primarily by deltas rather than absolute values. This method generalizes to many other types of sensors, although a simple y = Mx + B approach usually does not handle exponential sensors very well. As with calibration, the raw data used for normalization should span the range of values you expect to gather with the sensors later.

The method described here only corrects differences in Offset [with the B value] & Gain/Sensitivity [the M value] – more complex methods are needed to correct non-linearity problems. To have enough statistical weight for accuracy improvement you want a batch of ten or more sensors and it’s a good idea to exclude data from the first 24 hours of operation so new sensors have time to settle. Offsets are influenced by several factors and some sensors need to ‘warm up’ before they can be read. The code driving your sensors during normalization should be identical to the code used to collect data in the field.

All sensor parameters drift so, just like calibration, normalization constants have a shelf life. This is usually about one year, but can be less than that if your sensors are deployed in harsh environments. Fortunately this kind of normalization is easy to redo in the field, and it’s a good way to spot sensors that need replacing.


References & Links:

Decoding Pressure @ Penn State
Environmental Mesonet @ Iowa State
Calibrating your Barometer: Part1, Part2 & Part3
ISA Standard Atmosphere calculator
Starpath SLP calculator
SensorsONE Pressure Calculators
Mean Sea Level Pressure converter

A practical method for calibrating NTC thermistors

This post describes a thermistor calibration, accurate to better than ±0.15°C, that is achievable by people who don’t have access to lab equipment. The method is particularly suitable for the 10k NTC on our 2-module data logger, and it handles the loggers in a way that is easy to standardize for batch processing (ie: at the classroom scale). We use brackets to keep the loggers completely submerged because the thermal conductivity of the water around the housing is needed to keep the reference and NTC sensors from diverging. The target range of 0-40°C used here covers moderate environments, including the underwater and underground locations we typically deploy into. This method is unique in that it uses the freeze process, rather than melting ice, for the 0°C data point.

Use stainless steel washers in your hold-downs to avoid contamination of the distilled water and provide nucleation points to limit super-cooling. Before creating this bracket we simply used zip-ties to hold the washer weights.

Reading the thermistor with digital pins uses far less power, and gives you the resistance of the NTC directly from the ratio of two timing measurements. Resolution is not set by the bit depth of your ADC, but by the size of the reservoir capacitor: a small 0.1µF [104] ceramic delivers about 0.01°C, with jitter in the main system clock imposing a second limit on resolution of roughly the same size. However, this calibration procedure will work no matter what method you use to read your NTC thermistor.
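Our Input-Capture-Unit implementation is covered in the post linked in the references, but purely to illustrate the ratio-of-two-times idea, here is a simplified (and slower) sketch using micros() and digitalRead(). The wiring it assumes is hypothetical rather than the e360 circuit: a 0.1µF cap from a common node to ground, with that node also tied to a sense pin, a 10k reference resistor on one drive pin, and the NTC on another.

// Simplified ratio-of-two-times read. NOT the ICU code from the project,
// and the pin mapping below is illustrative rather than the e360 wiring.
const byte SENSE_PIN = 6;                     // reads the capacitor voltage
const byte REF_PIN   = 7;                     // known 10k reference resistor
const byte NTC_PIN   = 8;                     // the thermistor under test
const float REF_OHMS = 10000.0;

unsigned long riseTime(byte drivePin) {
  pinMode(SENSE_PIN, OUTPUT); digitalWrite(SENSE_PIN, LOW);  // drain the cap
  delay(5);
  pinMode(SENSE_PIN, INPUT);                                 // release the node
  pinMode(drivePin, OUTPUT); digitalWrite(drivePin, HIGH);   // charge through the resistor
  unsigned long start = micros();
  while (digitalRead(SENSE_PIN) == LOW) {}                   // wait for the logic threshold
  unsigned long elapsed = micros() - start;
  pinMode(drivePin, INPUT);                                  // back to high impedance
  return elapsed;
}

float readNTCohms() {                         // same cap & same threshold cancel out in the ratio
  return REF_OHMS * (float)riseTime(NTC_PIN) / (float)riseTime(REF_PIN);
}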

The I2C reference sensor is connected temporarily during the calibration via Dupont headers.

Off-the-shelf sensors can be used as ‘good enough’ reference thermometers provided you keep in mind that most accuracy specifications follow a U-shaped curve around some sweet spot that’s been chosen for a particular application. The Si7051 used here has been optimized for the medical market, so it has ±0.1°C accuracy from 35.8 to 41°C, but that falls to ±0.13°C at room temperatures and only ±0.25°C at the ice point. If you use some other sensor (like the MAX30205 or the TSYS01) make sure its datasheet specifies how the accuracy changes over the range of temperatures you are targeting.

Likewise, the shortened three term Steinhart–Hart equation is not considered sufficiently accurate for scientific instruments which often use a four or five term polynomial. To calculate the equation constants you need to collect three temperature & resistance data pairs which can be entered into the online calculator at SRS or processed with a spreadsheet.

While these technical sources of error limit the accuracy you can achieve with this method, issues like thermal lag in the physical system and your overall technique are equally important. In general, you want each step of this process to occur as slowly as possible. If the data from a run doesn’t look the way you were expecting – then do the procedure over again until those curves are well behaved and smooth.

Data Point #1: The freezing point of water

The most common method of obtaining a 0°C reference is to place the sensor into an insulated cup of stirred ice slurry that plateaus as the ice melts. This is fine for waterproof sensors on the end of a cable but it is not easily done with sensors mounted directly on a PCB. So we immerse the loggers in a 1200ml silicone food container filled with distilled water. This is placed inside of a well-insulated lunch box and the combined assembly is left in the freezer overnight, with the loggers reading every 30 seconds.

Weighted holders keep each logger completely immersed. Soft-walled silicone containers expand to accommodate any volume change as the water freezes. This prevents the centrifuge tube housings from being subjected to too much pressure.
The outer lunch box provides insulation to slow the freezing process. After testing several brands, it was found that the Land’s End EZ wipe and Pottery Barn Kids Mackenzie Classic boxes provided the best thermal insulation because they have no seams on the molded foam interior, which also doesn’t absorb water spilled while moving the container.

For the purpose of this calibration (at ambient pressure) we can treat the freezing point of pure water as a physical constant. So no reference sensor is needed on the logger while you collect the 0°C data. Leave the lunch box in the freezer long enough for a rind of ice to form around the outer edges while the main volume of water surrounding the loggers remains liquid. I left the set in this photo a bit too long as that outer ice rind is much thicker than it needed to be for the data collection.

The larger bubbles in this photo were not present during the freeze, but were created by moving the container around afterward.

The trick is recognizing which data represents the true freezing point of water. Distilled water super-cools by several degrees, and then rises to 0°C for a brief period after ice nucleation because the phase change releases 80 calories per gram while the specific heat capacity of water is only one calorie per degree per gram. So freezing at the outer edges warms the rest of the liquid – but this process is inherently self-limiting which gives you a plateau at exactly 0°C after the rise:

NTC (ohms) gathered during the freeze/thaw process, with the y axis inverted because of the negative coefficient. Several hours of warm temperature data has been removed from the graphs above to display only the relevant cold temperature data. Cooling the water from its initial room temperature starting point to the supercooling spike shown above took eight hours, and the complete thaw took another eight hours.
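To put a number on why that plateau sits pinned at 0°C, consider how much liquid each gram of newly formed ice can warm:

1\ \mathrm{g\ of\ ice} \times 80\ \tfrac{\mathrm{cal}}{\mathrm{g}} = 80\ \mathrm{cal} = \text{enough heat to warm } 80\ \mathrm{g} \text{ of liquid water by } 1^{\circ}\mathrm{C}

So the water around the loggers stays at the freezing point for as long as new ice keeps forming faster than the freezer can pull that heat away.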

Depending on the strength of your freezer, and the quality of the outer insulating container, the ice-point may only last a few minutes before temperatures start to fall again. An average of the NTC readings from that initial peak is your 0°C calibration data point.  This is usually near 33000 ohms for a 10k 3950 thermistor. Only the data immediately after super cooling ends is relevant and the insulated box can be removed from the freezer any time after that.

If the supercooling spike is not obvious in your data then change your physical configuration to slow the cooling process until it appears. You want the inner surface of your silicone container to have smooth rounded edges, as sharp corners can nucleate the ice at 0°C, preventing the supercooling. Use as much water as the container will safely hold. You do not want the water to freeze solid as this will subject the loggers to stress that could crack the housings.

Unexpected thermal excursions may happen if the freezer goes into a defrost cycle or an automatic ice-maker kicks in during the run. If you put the box in the freezer between 6-7pm, it usually reaches the supercooling point around 2am, reducing the chance that someone will open the freezer door at that plateau.

Data Point #2:  Near 40°C

We have used the boiling point of water for calibration in the past, but the centrifuge tube housings would soften considerably at those temperatures. Ideally you want to bracket your data with equally spaced calibration points and 100°C is far from the conditions we monitor. Heated water baths can be found on eBay for about $50, but my initial tests with a Fisher Scientific IsoTemp revealed thermal cycling that was far too aggressive to use for calibration – even with a circulation pump and many layers of added insulation. So we created an inexpensive DIY version made with an Arctic Zone Zipperless Coldloc hard-shell lunch box and a 4×6 inch reptile heating mat (8 watt). Unlike the ice point which must be done with distilled water, ordinary tap water can be used to collect the warm data pairs.

These hard-sided lunch boxes can often be obtained for a few dollars at local charity shops.
Place the 8-watt heating pad under the hard shell of the lunch box. At 100% power this tiny heater takes ~24 hours to bring the bath up to ~38°C. The bath temp is relatively stable since the heater does not cycle, but it does experience a slow drift based on losses to the environment. The heating pads sell for less than $15 on Amazon.

To record the temperature inside each logger, an Si7051 breakout module (from Closed Cube) is attached to the logger. A hold down of some kind must keep the logger completely submerged for the duration of the calibration. If a logger floats to the surface then air within the housing can thermally stratify and the two sensors will diverge. That data is not usable for calibration so the run must be done again with that logger.

Data Point #3: Room Temperature

The loggers stay in the heated bath overnight, and then in the morning they are transferred to an unheated water-filled container (in this case a second Arctic Zone lunch box) where they run at ambient temperatures for another eight to twelve hours. This provides the final reference data pair:

Si7051 temperature readings inside a logger at a 30 second sampling interval. The logger was transferred between the two baths at 8am. Both baths are affected by the temperature changes in the external environment.
Detail: Warm temp. NTC ohms (y-axis inverted)
Detail: Room temp. NTC ohms (y-axis inverted)

As the environment around the box changes, losses through the insulation create gentle crests or troughs where the lag difference between the sensors will change sign. So averaging several readings across those inflection points cancels out any lag error between the reference sensor and the NTC. Take care that you average exactly the same set of readings from both the Si7051 and from the NTC data. At this point you should have three Temperature / Resistance data pairs that can be entered into the SRS online calculator to calculate the equation constants ->

I generally use six figures from the reference pairs, which is more than I’d trust in the temperature output later. I also record the Beta constants for live screen output because that low accuracy calculation takes less time on limited processors.

The final step is to use those constants to calculate the temperature from the NTC data with:
Temperature °C = 1/(A+(B*LN(ohms))+(C*(LN(ohms))^3))-273.15
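The same calculation expressed in Arduino-flavoured C looks roughly like this; the A, B and C values below are placeholders that you would replace with the constants returned by the SRS calculator for your own three data pairs, and the function name is mine rather than something from the project code:

#include <math.h>
// placeholder Steinhart-Hart constants - substitute the A, B & C values from YOUR calibration
const double SH_A = 1.10e-3, SH_B = 2.30e-4, SH_C = 9.00e-8;

double ntcTemperatureC(double ohms) {
  double lnR = log(ohms);                                    // natural log of the NTC resistance
  double kelvin = 1.0 / (SH_A + SH_B * lnR + SH_C * lnR * lnR * lnR);
  return kelvin - 273.15;                                    // Kelvin -> Celsius
}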

Then graph the calculated temperatures from the NTC over top of the original reference data from your commercial sensor. Provided the loggers were completely immersed in the water bath, flatter areas of the two curves should overlap one another precisely. However, the two plots will diverge when the temperature is changing rapidly because the NTC exhibits more thermal lag than the Si7051. This is because the NTC is located near the thermal mass of the ProMini circuit board.

Si reference & NTC calculated temperatures: If your calibration has gone well, the curves should be nearly identical as shown above, with exceptions only in areas where the temperature was changing rapidly or when the logger was exposed to air.

Also note that the heated-bath and room-temperature data points can be collected in separate runs. In fact, you could recapture any data pair and recalculate the equation constants with two older ones any time you suspect a run did not go smoothly. Add the constants to all of the data column headers, and record them in a Google Doc with the three reference pairs and the date of the calibration. Re-run the calibration in a year; you can then apply compensation techniques to correct for sensor drift in your dataset.

Validation

You should always do a final test to validate your calibrations, because even when the data is good it’s easy to make a typo somewhere in the process. Here, a set of nine calibrated NTC loggers are run together for a few days in a gently circulated water bath at ambient temperature –>


Two from this set are a bit high and could be redone, but all of the NTC temperature readings now fall within a 0.1°C band. This is a decent result from a method you can do without laboratory grade equipment, and they could be brought even closer together by normalizing the set.

Comments

Calibrating the onboard thermistor is a good idea even if you plan to add a dedicated temperature sensor, because you always have to do some kind of burn-in testing on a newly built logger – so you might as well do something productive with that time. I generally record as much data as possible during the calibration to fill more memory and flag potentially bad areas in the EEprom. (Note: Our code on GitHub allows only 1, 2, 4, 8, or 16 bytes per record to align with EEprom page boundaries.) And always look at the battery record during the calibration as it’s often your first clue that a logger might not be performing as expected. It’s also worth mentioning that if you also save the RTC temperatures as you gather the NTC calibration data, this procedure gives you enough information to calibrate that register. The resolution is only 0.25°C, but it does give you a way to check if your ‘good’ temperature sensors are drifting because the DS3231 tends to be quite stable.

For any sensor calibration the reference points should span the range you hope to collect later in the field. To extend this procedure for cold climates you could replace the ice point with the freezing point of Galinstan (-20°C) although a domestic freezer will struggle to reach that. If you need a high point above 40°C, you can use a stronger heat source. Using two of those 8 watt pads in one hard sided lunch box requires some non-optimal bending at the sides, but it does boost the bath temp to 50°C. 3D printed hold-downs will start to soften at higher temps so you may need to alter the design to prevent the loggers from popping out.

If your NTC data is so noisy you can’t see where to draw an average, check the stability of your regulator because any noise on the rail will affect the Schmitt trigger thresholds used by the ICU/timer. This isn’t an issue running from a battery, but even bench supplies can give you noise related grief if you’ve ended up with some kind of ground loop. You could also try oversampling, or a leaky integrator to smooth the data – but be careful to apply those techniques to both the reference and the NTC readings in exactly the same way because they introduce significant lag. Also note that the digital pin ICU based method for reading resistors does not work well with temperature compensated system oscillators because that circuitry could kick in between the reference resistor and NTC sensor readings.

And finally, the procedure described here is not ‘normalization’, which people sometimes confuse with calibration.  In fact, it’s a good idea to huddle-test your sensors in a circulating water bath after calibration to bring a set closer together even though that may not improve accuracy. Creating post-calibration y=Mx+B correction constants is especially useful when monitoring systems that are driven by relative deltas rather than by absolute temperatures. Other types of sensors like pressure or humidity have so much variation from the factory that they almost always need to be normalized before deployment – even on ‘commercial’ loggers. Normalize your set of reference sensors to each other before you start using them to calibrate your NTC sensors.


References & Links:

SRS Thermistor Constant Calculator
Steinhart-Hart Equation Errors BAPI Application Note Nov 11, 2015
The e360: A DIY Classroom Data Logger for Science
How to make Resistive Sensor Readings with DIGITAL I/O pins
How to Normalize a Set of Sensors

The e360: A DIY Classroom Data Logger for Science

2023 is the ten-year anniversary of the Cave Pearl Project, with hundreds of loggers built from various parts in the Arduino ecosystem and deployed for Dr. Beddows’ research. During that time her EARTH 360 – Instrumentation course evolved from using commercial equipment to having students build the entire platform for labs on environmental monitoring. The experience of those many first-time builders has been essential to refining our educational logger design, so in recognition of their ongoing and spirited enthusiasm we call this model the e360.

A standard 50mL centrifuge tube forms the housing, which is waterproof to several meters depth.

Many parallel trends have advanced the open-source hardware movement over the last decade, including progress towards inexpensive and (mostly) reliable 3D printing. In keeping with the project’s ethos of accessibility, we use an Ender 3 for the rails and you can download that printable stl file directly from Tinkercad. Tinkercad is such a beginner-friendly tool that students are asked to create a mounting bracket as part of the Lux/LDR calibration lab. This directly parallels our ever increasing use of 3D prints for equipment installations on the research side of the project.

A student-designed logger stand.

Cheap, simple, stand-alone loggers enable teaching and research opportunities that expensive, complex tools can not. However there are a few trade-offs with this minimalist design: Supporting only Analog & I2C sensors makes the course more manageable, but losing the DS18b20, which has served us so well over the years, does bring a tear to the eye. Removing the SD card from the previous model means you have to think about memory constraints on run-time. The RTC’s one-second minimum means this logger is not suitable for high frequency sampling – so you are not going to use it for experiments in eddy covariance or seismology. UV exposure makes the 50ml tubes brittle after about four months in full sun, and the coin cell limits operation to environments that don’t go much below freezing – although it’s easy enough to convert the logger to use two lithium AAAs and we’ve tested those down to -15°C.

The lab kit:

The basic logger kit costs about $10 depending on where you get the parts. Pre-assembly of the UART cable, sensor cluster & LED is optional depending on lab time. CP2102s are cheap, and have good driver support, but you do have to make that Dupont cable yourself.
Sensors we typically use: TTP233 touch, BMP280, BH1750, AM312 PIR, 1k &10k potentiometers and a sheet of metal foil for the capacitive sensor lab. Other useful additions are a Piezo Buzzer, a 0.49″ OLED and 32k EEproms. The screen is $4, but the other parts are about $1 each.

Expect 10-25% of the parts from cheap suppliers like eBay or Amazon to be high drain, or DOA. We order three complete kits per student to cover defects, infant mortality, and replacement of things damaged during the course. Many students build a second or third logger for their final project.

Assembling the logger:

This build is based on the 2-Module logger we released in 2022 with changes and additions to support the course labs. That post has extensive technical details on the logger core that have been omitted here for brevity. But it’s a good idea to read through that background material when you have time.

Modifications to the RTC module:

Clipping the Vcc leg (2nd leg in from the corner) forces the DS3231 to run from the coin cell, and disables the 32k output.
Disconnect the indicator LED by removing its limit resistor.
Remove the 200Ω charging resistor, and bridge Vcc to the battery power line at the black end of the diode.

Cutting the VCC input leg forces the clock to run on VBAT, which reduces the RTC to <1µA sleep, but currents can spike as high as 650µA when a temperature reading occurs. If the time reads 2165/165/165 instead of the normal startup default of 2000/01/01 then the registers are bad and the RTC will not function. Bridging Vcc to Vbat means a 3.3V UART will drive some harmless reverse current through older coin cells while it is connected. Some Macintosh USB-C to USB-A adapters are smart enough to go into shutdown if they detect any power back-fed from the coin cell when they are not powered from the USB-C side, so people using those dongles must first disconnect at the UART-to-logger end before removing the adapter from the USB-C port on their computer. DS3231SN RTCs drift 2 ppm (a maximum of 61 seconds/year) while -M variants drift 5 ppm, or a maximum of 153 seconds/year. If the RTC temperature reading is off by more than the 3°C spec, then the clock may drift more than that.

It’s a good idea to do a breadboard test of those RTC modules (with the logger base-code) before handing them out.

Modify & TEST the Pro Mini:

A Pro Mini style board continues as the heart of the logger, because they are still the cheapest low-power option for projects that don’t require heavy calculations.

Carefully clip the two-leg side of the regulator with sharp side-snips and wobble it back and forth till it breaks away.
Remove the limit resistor for the power indicator LED with a hot soldering iron tip.
Clip away the reset switch. This logger can only be started with serial commands via a UART connection.
Add 90° UART header pins and vertical pins on D2 – D6. Also add at least one analog input (here on A3). Students make fewer soldering errors when there are different headers on the two sides of the board.
Bend the vertical pins inward at 45° and tin them for wire attachments later.
Do not progress with the build until you have confirmed the ProMini has a working bootloader by loading the blink sketch from the IDE.

Add the Sensors & LED:

These additions are optional, but provide excellent opportunities for pulse width modulation and sensor calibration activities.

Join a 10k NTC thermistor, a 5528 LDR, a 330Ω resistor and a 0.1µF [104] ceramic capacitor. Then heat shrink the common soldered connection.
Thread these through D6=LDR, D7=NTC, D8=330Ω, and the cap connects to ground at the end of the board. The D6/D7 connections could be any resistive sensor up to a maximum reading of 65k ohms.
Solder the sensor cluster from the bottom side of the Pro Mini board and clip the tails flush with the board. Clean excess flux with alcohol & a cotton swab.
Add a 1k safety resistor to the ground leg of a common-cathode RGB LED. Extend the red channel with ~5 cm of flexible jumper wire.
Insert Blue=D10, Green = D11, GND = D12. Solder these from the under side of the Pro Mini and clip away the excess wire.
Bring the red wire over and solder it through D9. Note that if the RGB is not added, the default red LED on D13 can be used.

We have a separate post describing how to calibrate these NTC thermistors

Connect the Modules via the I2C Bus:

Use legs of a scrap resistor to add jumpers to the I2C bus connections on A4 (SDA) and A5 (SCL)
Cover those with small diameter heat-shrink and bend them so they cross over each other.
Use another scrap resistor to extend the Vcc and GND lines vertically from the tails of the UART headers. This is the most challenging part of the whole build for students!
Add a strip of double-sided foam tape across the chips on the RTC module and remove the protective backing.
Carefully thread the I2C jumpers though the RTC module.
Press the two modules together and check that the two boards are aligned.
Check that the two I2C jumpers are not accidentally contacting the header pins below, then solder all four wires into place on the RTC module.
Bend the GND wire to the outer edge of the module, and trim the excess from the SDA and SCL jumpers.
Solder a 1000µF [108J] tantalum capacitor to the VCC and GND wires. Clip away the excess wire.
Tin the four I2C headers on the RTC module and the SQW alarm output pin.
Join the RTC’s SQW output to the header pin on D2 with a short length of flexible jumper wire. At this point the logger core is complete and could operate as a stand-alone unit.
Bend the four I2C header pins up to 45 degrees.

As soon as you have the two modules together: connect the logger to a UART and run an I2C bus scanner to make sure you have joined them properly. You should see the DS3231 at 0x68, and the 4K EEprom at 0x57.
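Any of the common bus-scanner sketches will do the job; a minimal one is just:

#include <Wire.h>
void setup() {
  Wire.begin();
  Serial.begin(9600);                        // any convenient baud rate for this quick test
  for (byte address = 1; address < 127; address++) {
    Wire.beginTransmission(address);
    if (Wire.endTransmission() == 0) {       // 0 means a device ACKed at this address
      Serial.print("Device found at 0x");
      Serial.println(address, HEX);
    }
  }
}
void loop() {}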

Add Rails & Breadboard Jumpers:

Clip the bottom posts away from two 25 tie-point mini breadboards.
Insert the breadboards in the rails. Depending on the tolerance of your print, this may require more force and/or a deburring tool.
The mounted breadboards should be flush with the upper surface of the rails. If the breadboards are too loose in your print they can be secured quickly with a drop of cyanoacrylate super-glue sprinkled with a little baking soda to act as an accelerant.
The 3D printed rails have a pocket cutout for the logger stack. The RTC module board should sit flush with the upper surface of the rail. CA glue can tack down the corners, OR:
THIN zip ties hold the logger stack in place. OR, use the legs of a scrap resistor as twist-ties if the holes on the RTC module are too small for your zips.
Check that the stack is in the pocket at both diagonal corners.
Cut two 15 cm lengths of 22AWG solid core wire. Insert stripped ends into the breadboards as shown, then route through the holes in the rail.
Secure the wires from the underside with a zip tie. Note: the ‘extra’ holes in the rail are used to secure small desiccant packs during deployment.
Route the solid core wires along the side of the breadboard and back out through the two inner holes near the logger stack.
The green wire should exit on the analog side of the Pro Mini and the blue wire should be on the digital side.
Route and trim the green wire to length for the A3 header.
Strip, tin and solder the wire to the A3 pin.
Repeat the process for the blue wire, connecting it to D3.
Extend the four I2C headers on the RTC module with 3.5cm solid core jumpers. Here, white is SDA (data) and yellow is SCL (clock).
Bend the jumpers into the breadboard contacts. The Bmp280 and Bh1750 sensor modules require this crossover configuration.

A video covering the full assembly process:

NOTE: For people with previous building experience we’ve also posted a 4 minute Rapid Review.

The Code [posted on GitHub]

In addition to the included NTC / LDR combination, the code has support for the Bmp280, Bh1750, and PIR sensors, although you will need to install hp_BH1750 and BMP280_DEV via the library manager. Sensors are added / removed by uncommenting define statements at the beginning of the code. Each sensor enabled after the single-byte LowBat & RTCtemp defaults contributes two additional bytes per sampling event because every sensor’s output gets converted into a 16-bit integer.

The basic sensors cover light, temperature, and pressure – so you could teach an introductory enviro-sci course by enabling or disabling those sensors before each lab.

Bmp280 outputs can be saved individually. Total bytes per sampling record must be 1, 2, 4, 8 or 16 ONLY. You may need to add or remove RTC temp or Current Battery to make the byte total correct for a new sensor.

But limiting this tool to only the pre-configured sensors would completely miss the point of an open source data logger project. So we’ve tried to make the process of modifying the base-code to support different sensors as straightforward as possible. Edits are required only in the places indicated by call-out numbers on the following flow charts. These sections are highlighted with comments labeled: STEP1, STEP2, STEP3, etc. so you can locate them with the find function in the IDE.

Those comments are also surrounded by rows of +++PLUS+++ symbols:
//++++++++++++++++++++++++++++++++++++++++++
// STEP1 : #include libraries & Declare Variables HERE
//++++++++++++++++++++++++++++++++++++++++++

In Setup()

2024 note: Additional start-menu options have been added since this graphic was created in 2023, and there are a few hidden debugging options that are not displayed in the menu.

A UART connection is required to access the start-up menu through the serial monitor in the IDE. This menu times out after 8 minutes but the sequence can be re-entered at any time by closing and re-opening the serial monitor window. This restarts the Pro Mini if the UART’s physical connection to the DTR (data terminal ready) pin is good. The start-up menu should look similar to this:

If you see random characters in the serial window, you have the baud rate set incorrectly. Reset the baud to 500,000 and the menu should display properly; HOWEVER, you also need to close & re-open the window. If you use Ctrl-A to copy data from the serial monitor while the window still has garbled characters displayed, then only the bad starting characters will copy out. On a new logger, the Hardware, Deployment & Calibration info fields will display as rows of question marks until you enter some text via each menu option.

The first menu option asks if you want to download data from the logger, after which you can copy/paste (Ctrl[A] & Ctrl[C] then Ctrl[V]) everything from the serial window into a spreadsheet. Then, under the Data tab in Excel, select Text to Columns to divide the pasted data at the comma separators. Or you can paste into a text editor and save a .csv file for import to other programs. While this transfer is a bit clunky, everyone already has the required cable and retrieval is driven by the logger itself. We still use the legacy 1.8x version of the IDE, but you could also do this download with a generic serial terminal app. You can download the data without battery power once the logger is connected to a UART. However, you should only set the RTC after installing a battery, or the time will reset to 2000/01/01 00:00 when the UART is disconnected. No information is lost from the EEprom when you remove and replace a dead coin cell.

A Unix timestamp for each sensor reading is reconstructed during data retrieval by adding successive second-offsets to the first record time saved during startup. It is important that you download any old data before changing the sampling interval, because the interval stored in memory is used for the calculation that reconstructs each record’s time. This technique saves a significant amount of our limited memory, and =(Unixtime/86400) + DATE(1970,1,1) converts those Unix timestamps into Excel’s date-time format. Valid sampling intervals must divide evenly into 60 and be less than 60. Short second-intervals are supported for rapid testing & debugging, but you must first enter 0 for the minutes before the seconds entry is requested. The unit will keep using the previous sampling interval until a new one is set.
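The reconstruction itself is just an offset multiplication; the variable names here are illustrative rather than the ones used in the GitHub code:

// rebuild the Unix timestamp for a given record during download
// (the first-record time and the interval are read back from the EEprom)
uint32_t recordUnixTime(uint32_t firstRecordUnixTime, uint16_t intervalSeconds, uint16_t recordNumber) {
  return firstRecordUnixTime + (uint32_t)intervalSeconds * recordNumber;
}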

The easiest way to measure the rail is at the metal springs in the Dupont cable.

Vref compensates for variations in the reference voltage inside the 328p processor. Adjusting that constant up or down by 400 raises/lowers the reported voltage by 1 millivolt. Adjust this by checking the voltage supplied by your UART with a multimeter while running the logger with #define logCurrentBattery enabled and serial output toggled ON. Note the difference between the millivolts you actually measured and the battery voltage reported on screen, and then multiply that difference by 400 to get the adjustment you need to make to the default 1126400 vref for accurate battery readings. Save this new number with the [ ] Change Vref start menu option and re-run the test until the numbers on screen match what you measure with the DVM. This adjustment procedure only needs to be done once as the number is stored in the 328p EEprom for future use. Most loggers run fine with the default vref, although some units will shut down early because they are under-reading. It’s rare to get two constants the same in a classroom of loggers, so you can use student initials + vref as unique identifiers for each logger. However, if you do get a couple the same, you can change the last two digits to make unique serial numbers without affecting the readings. The battery readings have a resolution limit of 11 millivolts, so that’s as close as you can get.
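Under the hood this is the classic trick of reading the 328p’s internal 1.1V bandgap against the AVcc rail. A generic sketch of that measurement, with your calibrated constant standing in for the 1126400 default, looks something like this (not necessarily the exact function in the project code):

// generic rail-voltage read via the internal 1.1V bandgap (assumes the ADC is enabled)
long readRail_mV(long internalReferenceConstant) {        // pass your calibrated vref, default 1126400L
  ADMUX = _BV(REFS0) | _BV(MUX3) | _BV(MUX2) | _BV(MUX1); // select the bandgap input, AVcc as reference
  delay(2);                                               // give the reference time to settle
  ADCSRA |= _BV(ADSC);                                    // start a conversion
  while (bit_is_set(ADCSRA, ADSC));                       // wait until it completes
  return internalReferenceConstant / ADC;                 // rail voltage in millivolts
}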

After setting the time, the sampling interval, and other operating parameters, choosing [ ] START logging will require the user to enter an additional ‘start’ command. Only when that second confirmation is received does the storage EEprom get erased by pre-loading every memory location with zero. Red&Blue LEDs then flash to indicate a synchronization delay while the logger waits for the first alarm to align with the current minute/hour. A zero-trap is required on the first byte of each record because the preloaded zeros also serve as the End-Of-File markers during download. If you leave the default LogLowestBattery enabled that is already taken care of.

In the main LOOP()

If all you do is enable sensors via defines at the start of the program you won’t have to deal with the code that stores the data. However to add a new sensor you will need to make changes to the I2C transaction that transfers those sensor readings into the EEprom (and to the sendData2Serial function that reads them back out later). This involves dividing your sensor variables into 8-bit pieces and adding those bytes to the wire transfer buffer. This can be done with bit-math operations for long integers or via the lowByte & highByte macros for 16-bit integers. The general pattern when sending bytes to an I2C EEprom is:

Wire.beginTransmission(EEpromAddressonI2Cbus); // the device address is the first byte in the I2C buffer
Wire.write(highByte(memoryAddress));           // it takes two bytes to specify the
Wire.write(lowByte(memoryAddress));            // memory location inside the EEprom

byte loByte = lowByte(SensorReadingVariable);
Wire.write(loByte);                            // adds 1st byte of sensor data to the wire buffer
byte hiByte = highByte(SensorReadingVariable);
Wire.write(hiByte);                            // adds 2nd byte of sensor data to the buffer

// --add more Wire.write() statements here as needed for your sensor--
// You can add a total of 1, 2, 4, 8 or 16 DATA bytes to the I2C transaction. Powers-of-two increments are required because the recorded data must align with page boundaries inside the EEprom.

Wire.endTransmission();                        // only when this command executes does the data accumulated in the wire buffer actually get sent to the EEprom

The key insight here is that the wire library is only loading a memory buffer until Wire.endTransmission() is called. It does not matter how much time you spend doing calculations, or parsing variables, so long as you don’t start another I2C transaction while you are still in the middle of this one. Once that buffer is physically sent over the wires, the EEprom enters a self-timed writing sequence and the logger reads the rail voltage while the CR2032 is under load. This is the only way to accurately gauge the state of lithium coin cell batteries.

The data download function called in setup retrieves those separate bytes and concatenates them back into the original integers. The sequence of operations in the sendData2Serial function must exactly match the byte-order used to load the EEprom in the main loop.
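As a sketch of that mirror-image read (assuming a 16-bit value that was written low byte first, as in the pattern above; the function name is mine):

int16_t readSensorValue(byte EEpromAddressonI2Cbus, unsigned int memoryAddress) {
  Wire.beginTransmission(EEpromAddressonI2Cbus);
  Wire.write(highByte(memoryAddress));            // set the EEprom's internal address pointer
  Wire.write(lowByte(memoryAddress));
  Wire.endTransmission();
  Wire.requestFrom(EEpromAddressonI2Cbus, (byte)2);
  byte loByte = Wire.read();                      // the low byte was stored first
  byte hiByte = Wire.read();
  return (int16_t)((hiByte << 8) | loByte);       // concatenate back into the original integer
}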

Various Sensor Options:

By default, the logger records the RTC temperature at 0.25°C resolution and the battery voltage under load. These readings are compressed to only one byte each by scaling with a fixed offset. This allows ~2048 readings to be stored on the built-in 4k EEprom, which provides 21 days of operation at a 15-minute sampling interval.
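One plausible packing scheme looks like the sketch below; the offset and step size here are illustrative only, not necessarily the ones used in the e360 code:

// pack a battery reading into one byte: here 10 mV steps above a 1700 mV floor
// covers roughly 1.7 - 4.25 volts (the offset & step size are my assumptions)
byte packBattery(int millivolts) {
  int index = (millivolts - 1700) / 10;
  return (byte)constrain(index, 1, 255);          // never store 0: it is the End-Of-File marker
}
int unpackBattery(byte stored) { return 1700 + 10 * (int)stored; }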

A typical RTC temp record from a logger installed in a cave early in the project. The datasheet spec is ±3°C, but most are better than ±0.5 near 25°C. The RTC updates its temperature register every 64 seconds.

That 4k fills more quickly if your sensors generate multiple 2-byte integers but larger 32k (AT24c256) EEproms can easily be added for longer running time. These can be found on eBay for ~$1 each and they work with the same code after you update the EEpromI2Caddr & EEbytesOfStorage #defines at the start of the program.

Header connections on this Bmp280 sensor match those on the 32k module. So the two boards can be soldered to the same set of double-length pins.
Vertical stacking allows several I2C modules to fit into the 50 mL tube. Any I2C sensor breakouts could be combined this way if they have different bus addresses.

Pullup resistors can be left on the sensor modules as the logger will operate fine with a combined pull below 2k. No matter what sensor you enable, always check that the total of all bytes stored per pass through the main loop is 1,2,4,8 or 16 or you will get a repeating data error whenever the bytes transmitted over the I2C bus pass a physical page boundary inside the EEprom. This leads to a wrap-around error which over-writes data at the beginning of that page/block.

Perhaps the most important thing to keep in mind is that breadboards connect the sensor module headers via tiny little springs which are easily jiggled loose if you bump the logger. Some sensors can handle this momentary disconnection but many I2C sensors require full re-initialization or they will not deliver any more data after a hard knock. So handle the logger gently while it’s running – no tossing them in a backpack full of books until after you’ve hard-soldered the sensor connections.

The code also includes the DIGITAL method we developed to read the NTC/LDR sensors. On this new build we used the internal pullup resistor on D8 as a reference to free up another digital pin. The blue jumper wire on D3 (the 2nd external interrupt) can wake the logger with high / low signals. This enables event timing and animal tracking. Pet behavior is a popular theme for final student projects.

The TTP233 can detect a press through 1-2mm of flat plastic but it does not read well through the curved surface of the tube. In open air it triggers when your finger is still 1cm away but the sensitivity can be reduced by adding a trimming capacitor.
The AM312 draws <15µA and has a detection range of ~5m through the centrifuge tube. This sensor has a relatively long 2-4 second reset time and will stay high continuously if it gets re-triggered in that time. Our codebase supports counting PIR detections OR using the PIR to wake the logger for other sensor readings instead of the standard RTC alarm.

These 0.49″ micro OLEDs sleep at only 6µA and usually draw less than a milliamp at 50% contrast. However, like all OLEDs they put wicked charge-pump spikes on the rail. A 220µF tantalum right next to them on the breadboard suppresses that noise completely. Sleep the CPU while the pixels are turned on to lower the current load on the coin cell.

These displays run about two weeks on a coin cell at 15 minute intervals, depending on contrast, pixel coverage, and display time. It might also be possible to depower them when not in use with a mosfet like the TN0702N3.
These micro OLEDs are driven by an SSD1306, so you can use standard libraries like SSD1306Ascii. They only display a weirdly located sub-sample of that controller’s 1k memory – so you have to offset the origin on your print statements accordingly.

While I2C sensors are fun, we should also mention the classics. It is often more memorable for students to see or hear a sensor’s output, and the serial plotter is especially useful for lessons about how noisy their laptop power supply is…

If you twist the legs 90°, a standard potentiometer fits perfectly into the 25 tie-point breadboard for ADC controlled rainbows on the RGB LED.
Light-theremin tones map onto the squawky little Piezo speaker and alligator clips make it easy to try a variety of metal objects in the Capacitive Sensing lab.

Unless you are running a lab tethered to the UART for power, any sensors you add have to operate within the current limitations of the coin cell powering the logger. This means they should take readings below 2mA and support low-power sleep modes below 20µA (ideally < 2µA). A GPS module or a CO2 sensor usually requires too much power, so to use those you will need one of the previous AA-powered loggers from the project.

Logger Operation:

The logger typically sleeps between 5 – 10µA with a sensor module attached. Four 5mA*30millisecond sensor readings per hour gives an estimated battery lifespan of about one year. So the logger is usually more limited by memory than the 100mAh available from a Cr2032. The tantalum rail-buffering capacitor extends operating life about 20% under normal conditions, but it becomes much more important with power hungry sensors or in cold environments where the battery chemistry struggles:

A BMP280 sampling event with NO rail buffering capacitor draws a NEW coin cell down about 100mv during the EEprom save…
…while the voltage on an OLD coin cell falls by almost 200 millivolts during that same event on the same logger – again with NO rail buffer cap.
Adding a 1000µF [108j] tantalum to that same OLD battery logger supports the coin cell, so the recording event now drops the rail less than 40mV.
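As a rough check on that one-year lifetime estimate (assuming ~100 mAh usable and the upper 10 µA sleep figure), the reading pulses barely register against the sleep current:

I_{avg} \approx 10\ \mu\mathrm{A}_{sleep} + \frac{4 \times 5\ \mathrm{mA} \times 0.03\ \mathrm{s}}{3600\ \mathrm{s}} \approx 10.2\ \mu\mathrm{A}, \qquad \frac{100\ \mathrm{mAh}}{10.2\ \mu\mathrm{A}} \approx 9800\ \mathrm{h} \approx 13\ \mathrm{months}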

The code sleeps permanently when the battery reading falls below the value defined at systemShutdownVoltage, which we usually set at 2.8v because many 328p chips trigger their internal brown-out detector circuit at 2.77v. And the cheap old-stock I2C EEproms you get from eBay often have a lower operational limit of 2.7v.

When testing sleep current on a typical batch of student builds, some will appear to have anomalously high sleep currents in the 600-700µA range. Often that’s due to the RTC alarm being on (active low), which causes a constant drain through the 4k7 pullup resistor until the alarm gets turned off by the processor. Also, the 1000uF tantalum capacitors have a ‘saturation period’ where they have to be exposed to voltage for several hours before they settle down to their typical 2µA leakage current. It is worth mentioning that tantalum capacitors are heat sensitive, so beginners can damage them while soldering. A typical logger should sleep below 5µA, and if they are consistently 10x that, replacing an overheated rail capacitor may bring that down to the expected sleep current. Also check that the ADC is properly disabled during sleep, as that will draw ~200µA if it is left on while sleeping. The Brown Out Detector typically draws 20µA if that is left on during sleep.
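For reference, the bare-bones sleep housekeeping looks something like the sketch below; the project code on GitHub wraps more around this, so treat it as an outline rather than the exact implementation:

#include <avr/sleep.h>
#include <avr/power.h>

void sleepLogger() {                       // assumes a wake interrupt (eg the RTC alarm on D2) is attached
  byte oldADCSRA = ADCSRA;
  ADCSRA = 0;                              // disable the ADC (~200µA if left running during sleep)
  power_adc_disable();
  set_sleep_mode(SLEEP_MODE_PWR_DOWN);     // deepest sleep mode on the 328p
  noInterrupts();
  sleep_enable();
  sleep_bod_disable();                     // also switch off the brown-out detector while asleep
  interrupts();
  sleep_cpu();                             // execution stops here until the interrupt fires
  sleep_disable();
  power_adc_enable();
  ADCSRA = oldADCSRA;                      // restore the ADC for the next reading
}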

However, a few loggers will end up with hidden solder bridges that require a full rebuild. This can be emotionally traumatic for students until they realise how much easier the process is the second time round. Once you’ve made a few, a full logger can usually be assembled in about 1.5 hours. It’s even faster if you make them in batches so that you have multiple units for testing at the same time.

Running the labs:

The small form factor of the e360 enables new benchtop exercises like this Mason jar experiment. The loggers are tightly sealed in a jar taken directly from the freezer with a BMP280 sampling every 15 seconds. The jars are then placed in front of a fan which brings them to room temp in about 45 minutes.

Through no fault of their own, students usually have no idea what messy real-world data looks like, and many have not used spreadsheets before. So you will need to provide both good and bad example templates for everything, but that’s easy enough if you ran the experiment a dozen times at the debugging stage.

Even then students will find creative ways to generate strange results by using a cactus for the evapotranspiration experiment, or attempting the light sensor calibration in a room that never rises beyond 100 Lux. When students are doing a project it is critical that you make them download and send graphs of the data they’ve captured every day for feedback. Otherwise they will wait until the hour before a major assignment is due before finding out that their first (and sometimes only) run didn’t give them any useable data. Your grading rubric has to be focused on effort and understanding rather than mechanical results, because the learning goals can still be achieved if they realize where things went wrong.

The temperature readings have serious lag issues while the pressure readings do not: a good lesson in thinking critically about the physical aspects of a system before trusting a sensor’s output. With the built-in 4096 byte EEprom, saving all three 2-byte Bmp280 outputs (temp, pressure & altitude) plus two more bytes for RTCtemp & battery gives you room for 512 of those 8-byte records. If you sample every fifteen seconds, the logger will run for about two hours before the RTC module’s 4k memory is full.

Important things to know:

Time: You need to start ordering parts at least three months ahead of time. Technical labs take a week to write, and another week for debugging. You can expect to spend at least an hour testing components before each lab. The actual amount of prep also depends on the capabilities of your student cohort, and years of remote classes during COVID lowered that bar several pegs. Have several spare ‘known good’ loggers on hand to loan out so hardware issues don’t prevent students from progressing through the lab sequence while they trouble-shoot their own build. Using multi-colored breadboards on those loaners makes them easy to identify later. Measuring logger sleep current with a DSO 138 scope, a µCurrent or a Current Ranger will spot most hardware issues early, but students don’t really get enough runtime in a single course to escape the bathtub curve. On the research side of the project, we run our loggers to full memory shutdown several times, over a couple of weeks, before they are considered ready to deploy. In addition to data integrity, we also look for smooth predictable battery burn-down curves.

Yes, some of that student soldering will be rough. But my first kick at the can all those years ago wasn’t any better and they improve rapidly with practice. As long as the intended electrical contact is made without sideways bridges, the logger will still operate.

Money: Navigating your school’s purchasing system is probably an exercise in skill, luck and patience at the best of times. Think you can push through dozens of orders for cheap electronic modules from eBay or Amazon? Fuhgeddaboudit! We have covered more than half of the material costs out of pocket since the beginning of this adventure, and you’ll hear that same story from STEM instructors everywhere. If you can convince your college to get a class set of soldering irons, multimeters, and perhaps a 3D printer with some supplies, then you are doing great. We bought nice DVMs at the beginning but they all got broken, or grew legs, before we got enough course runs with them. We now use cheap DT830s and design the labs around their limitations. Smaller tools like wire-strippers and side-snips should be considered consumables, as few live long enough to see a second class. Cheap soldering irons can now be found for ~$5 (which is less than tip replacement on a Hakko!) and no matter which ones you get the students will run them dry every time. The up-side of designing a course around the minimum functional tools is that you can just give an entire set to any students who want to continue on their own after the course. That pays dividends later that are worth far more than any one year’s budget.

All that probably sounds a bit grim, but the last thing we want is for instructors to bite off more than they can chew. You will need to noodle with these loggers for a few months before you are ready to integrate them into your courses. Not because any of it is particularly difficult, but because you will need to work with them before you realize the many different ways this tool can be used. A summer climate station project with five to ten units running in your home or back yard is a great way to start and, if you do invest that time, it really is worth it:

The build lab at the beginning of the course – with everybody still smiling because they still have no idea what they are in for. The course is offered to earth & environmental science students – not engineers!

Zigging while others zag:

Why did we spend ten years developing this DIY logger when the market is already heaving with data acquisition equipment? Our primary issue with polished educational products using pre-written software is that they are usually plug-and-play. The last thing you want in higher education is something that black-boxes data acquisition to the point that learners become mere users of the technology. While companies boast that students can take readings without hassle and pay attention only to the essential concepts of an experiment, that has never been how things work in the real world. Troubleshooting by process of elimination, combined with modest repair skills, often makes the difference between a fieldwork disaster and a resounding success. So sanitized science equipment that generates uncritically trusted numbers just isn't compatible with problem-based learning. Another contrast is the sense of ownership & accomplishment that becomes clear when you realize how many students gave their DIY loggers names, and then displayed them proudly in their dorms after the course. That's not something you'll buy off a shelf.




References:
ATmega328P Datasheet
Waterproofing your Electronics Project
Oregon Embedded Battery Life Calculator & Cr2032 testing
Nick Gammon: An excellent technical information source
A practical method for calibrating NTC thermistors

Testing Cr2032 Coin Cell Batteries with μA to mA pulsed duty cycles

Cr2032 Internal Resistance vs mAh [Fig6 from SWRA349] Our peak load of ~8mA while writing data to the EEprom creates a voltage drop across the battery's internal resistance. That load-induced transient on the 3v Cr2032 can't fall below 2.775v or the BOD halts the 328p processor. This limits our usable capacity to the region where battery IR is less than 30 ohms. It also makes it critical to control when different parts of the system are active, to keep the peak current as low as possible.
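One way to shave that peak (mentioned again in a test caption further below) is to slow the system clock while the EEprom page write is in flight. A minimal sketch, assuming an 8MHz ProMini-style build – note that millis(), delay() and the I2C bit rate all scale with the clock, so keep the slow window short:

```cpp
#include <avr/io.h>
#include <avr/interrupt.h>

void saveRecordAtLowClock() {
  noInterrupts();                       // CLKPR changes are a timed 2-step sequence
  CLKPR = _BV(CLKPCE);                  // unlock the clock prescaler
  CLKPR = _BV(CLKPS1) | _BV(CLKPS0);    // divide-by-8: 8 MHz -> 1 MHz
  interrupts();

  // saveRecordToEEprom();              // placeholder for the actual I2C page write

  noInterrupts();
  CLKPR = _BV(CLKPCE);
  CLKPR = 0;                            // back to divide-by-1 (full speed)
  interrupts();
}
```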

Reviewers frequently ask us for estimates based on datasheet specifications, but this project is constantly walking the line between technical precision and practical utility. The dodgy parts we're using are likely out of spec from the start, but that's also what makes our 2-module data loggers cheap enough to deploy where you wouldn't risk pro-level kit. And even when you do need to cross those t's and dot those i's, you'll discover that OEM test conditions are often prescribed to the point of being functionally irrelevant in real-world applications. The simple question "How much operating lifespan can you expect from a coin cell?" is difficult to answer because the capacity of lithium manganese dioxide button cells is nominal at best and wholly dependent on the characteristics of the load. CR2032's only deliver 220mAh when the load is small: Maxell's datasheet shows that a 300 ohm load, for a fraction of a second every 5 seconds, will drop the capacity by 25%. But if the load falls below 3μA then the battery develops high internal resistance, reducing the capacity by more than 70%.

Voltage under EEprom load vs date [runtime hours in legend] with the red LED on D13 driven HIGH for a 1.4mA sleep current, 30-second interval, 8-byte buffer. These are serial tests performed on the same logger. 1.4mA continuous is probably not relevant to our duty cycle.

Surprisingly little is known about how a CR2032 discharges in applications where low μA-level sleep currents are combined with frequent pulse loads in the mA range; yet that's exactly what a datalogger does. Normal run tests take so long to complete that you've advanced the code enough in the interim that the data is stale. Another practical consideration is that down at 1-2μA, flux, fingerprints, and even ambient humidity skew the results in ways that aren't reproducible from one run to the next. So a second question is "How much can you accelerate your test and still have valid results?" Datasheets from Energiser, Duracell, Panasonic and Maxell reveal a common testing protocol using a 10-15kΩ load, so continuous discharges below 190μA shouldn't drive you too far from the rated capacity. Unfortunately, that's far below the load currents used by affordable battery testers, or the tests you see on YouTube, so we are forced yet again to do our own empirical testing.

The easiest way to change our base load is to leave the indicators on: all three LEDs add ~80μA to the sleep current when lit through the internal pullup resistors (a minimal sketch of this pin trick follows the table below). 80μA is ~16x our normal 5μA sleep current (including RTC temp conversions). A typical sampling interval for our work is 15 minutes, so changing that to 1 minute gives us a similarly increased number of EEprom saves. With both changes, we tested several brands down to our 2775mv shut-down:

Cr2032 Voltage Under EEprom Load vs Date: Accelerated Cr2032 run tests with 3 LEDs lit via INPUT_PULLUP for an ~80μA sleep current (each unit was slightly different, as noted), 1-minute sampling interval. Blue line = average excluding Hua Dao. CLKPR reduced the system clock to 1MHz during the eeprom save on this test, holding peak currents to about 6mA.
Brand         Run Time (h)   Cost / Cell
Panasonic         1500          $1.04
Voniko            1448          $0.83
Maxell            1444          $0.58
Toshiba           1325          $0.64
Energiser         1307          $1.28
Duracell          1298          $1.81
AC Delco          1223          $0.79
Nightkonic        1186          $0.24
MuRata            1175          $0.45
Hua Dao            642          $0.14
Note: With the slight variation between each logger's measured sleep current, the times listed here have been adjusted to a nominal 80μA. Also note that the price per cell is highly dependent on vendor & quantity.
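For reference, the 'accelerated load' pin trick mentioned above boils down to a few lines – pin numbers here are placeholders, and the extra current per LED depends on its forward voltage and the ~20-50k spread of the internal pullups:

```cpp
const byte RED_LED   = 4;   // illustrative pin assignments
const byte GREEN_LED = 5;
const byte BLUE_LED  = 6;

void enableLedSleepLoad() {            // each LED conducts weakly through the internal pullup
  pinMode(RED_LED,   INPUT_PULLUP);
  pinMode(GREEN_LED, INPUT_PULLUP);
  pinMode(BLUE_LED,  INPUT_PULLUP);
}

void disableLedSleepLoad() {           // back to high impedance, LEDs off
  pinMode(RED_LED,   INPUT);
  pinMode(GREEN_LED, INPUT);
  pinMode(BLUE_LED,  INPUT);
}
```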

Despite part variations these batteries were far more consistent on that 20 ohm plateau than I was expecting. This 16x test gives us a projected runtime of more than two years! That’s twice the estimate generated by the Oregon Embedded calculator when we started building these loggers. We did get a 30% delta between the name brands, but these tests were not thermally controlled and we don’t know how old the batteries were before the test. The rise in voltage after that initial dip is probably the pulse loads slowly removing the passivation layer that accumulates during storage. The curves are a bit chunky because the 328P’s internal vref trick has a resolution of only 11mv, and we index-compress that to one byte which results in only 16mv/bit in the logs.
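For anyone who hasn't seen the 'internal vref trick' before, this is the common approach on a 328p (not necessarily our exact code): measure the 1.1v bandgap against Vcc and back-calculate the rail voltage. The 1.1v reference varies from chip to chip, so the scaling constant really should be calibrated per board.

```cpp
// Returns the rail voltage in millivolts by reading the internal 1.1V bandgap
// with AVcc as the ADC reference. Resolution near 3V works out to roughly
// 10mV per ADC count, which is why the logged battery curves look 'chunky'.
long readVccMillivolts() {
  ADMUX = _BV(REFS0) | _BV(MUX3) | _BV(MUX2) | _BV(MUX1);  // AVcc ref, bandgap input
  delay(2);                              // let the reference settle
  ADCSRA |= _BV(ADSC);                   // start a conversion
  while (ADCSRA & _BV(ADSC));            // wait for it to finish
  uint16_t raw = ADC;                    // 10-bit result
  return 1125300L / raw;                 // 1.1V * 1023 * 1000 / raw
}
```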

One notable exception is the no-name Hua Dao cells, which I tested because, at only 14¢ each, they are by far the cheapest batteries on Amazon. We have many different runs going at any one time, and to make those inter-comparable you need to start each test with a fresh cell. Even if the current run test doesn't need a battery's full capacity, sometimes you just need to eliminate that variable while debugging. You also use a lot of one-shots for rapid burn-in tests, so it makes sense for them to be as cheap as possible. Now that I know Hua Dao delivers less than half the lifespan of name-brand cells, I can leverage that fact to run some of the tests more quickly. I had planned on doing this with smaller batteries, but the Rayovac Cr2025 I tested ran for 1035 hours – much longer than the Hua Dao Cr2032!

Cr2032’s used since January for bench testing.

Testing revealed another complicating factor: with metal prices sky-rocketing, fake lithium batteries are becoming more of a problem. We've been using Sony Cr2032's from the beginning of the project, but the latest batch performed more like the Hua Dao batteries. This result was so unexpected that I dug through the bins for some old stock, only to find that the packaging looked different:

Fake (left) vs Real (right)

On closer inspection it didn’t take long to spot the fraud:

Fake Sony Battery: laser-engraved logo
Real Sony Battery: embossed logo

More tests are under way, so I'll add those results to this post when they are complete. A couple of the 80μA units have been re-run after removing the 227E 25V 220μF rail buffering caps. This confirmed that the tantalum does not extend overall run time very much on good quality batteries, because their internal resistance rises very slowly, but it can more than double the lifespan of low quality batteries like the Hua Dao. 1000μF rail caps have higher leakage (~1μA) so they start reaching the point of diminishing returns for long deployments: adding only about 35% to total runtime unless you have a high-drain sensor. It's also worth mentioning that the spring contacts on those RTC modules are quite weak and may need a bit of heat shrink tubing behind them to strengthen the connection to the flat surface of the coin cell.

Northern caves hold near 5°C all year round, so the current set is running in my refrigerator. I will follow that with hotter runs because both coin cell capacity AND self-discharge are temperature dependent. We also plan to start embedding these loggers inside rain gauges, which will get baked under a tropical sun.

Addendum 2023-08-01

This summer's fieldwork required all of the units in my testing fleet, so I only have a handful of results from the refrigerator burn-down tests [at an average temp of 5°C]. The preliminary outcome is that, compared to the room temperature burns, the lithium cell's plateau voltage drops by 80-100mv (typically from 2995mv to 2890mv). Provided the loggers were reading a low-drain sensor, the 'cold' lifespan was only about 20% shorter, because the normal 50-70mv battery droop (per sensor reading / eeprom save) only becomes important after the battery falls off its 20-ohm plateau. This is approximately the same lifespan reduction you see running at room temp without a rail buffering capacitor – as the buffer also only comes into play when the battery voltage is descending. This is also the reason why the larger 1000μF rail capacitors usually only provide about 15-20% longer life than the 220μF rail caps: the reduced battery droop with the larger cap only matters when the cell is already nearing end of life. The net result is that increasing to a 1000μF rail buffer almost exactly offsets the lifespan losses at colder ambient temps around 5°C. But at normal room temps the 2μA leakage of a 6v 1000μF [108j] tantalum removes most of its advantage over the 25v 220μF [227e], which has ~5nA leakage at 3 volts. And whenever you see anomalously high sleep currents on a logger, your first suspect should be a defective or over-heated tantalum rail buffering capacitor. Also note that some caps seem to need a few hours to 'burn in' before they are saturated enough to measure their leakage properly.

A final gotcha to be aware of is that some DS3231 RTCs will assert an alarm on SQW even if the alarm enable bits in the register have been properly cleared. This will draw a constant 680-700μA through the 4k7 pullup resistor on the module until an I2C bus transaction sets a new alarm or halts the RTC oscillator.
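A defensive reset you might try for that last gotcha – a hedged sketch only, since module behaviour varies: explicitly disable both alarm interrupt enables in the DS3231 control register (0x0E) and clear the A1F/A2F flags in the status register (0x0F) at startup.

```cpp
#include <Wire.h>
const uint8_t DS3231_ADDRESS = 0x68;

void clearDS3231Alarms() {            // call after Wire.begin()
  Wire.beginTransmission(DS3231_ADDRESS);
  Wire.write(0x0E);                   // control register
  Wire.write(0b00000100);             // INTCN=1, A1IE/A2IE=0, square wave off
  Wire.endTransmission();

  Wire.beginTransmission(DS3231_ADDRESS);
  Wire.write(0x0F);                   // status register
  Wire.write(0b00000000);             // clear A1F & A2F (also clears OSF/EN32kHz)
  Wire.endTransmission();
}
```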

The freezer results are an entirely different situation: there we are only seeing a few days of accelerated 100μA sleep-current operation because, at -15°C, the coin cell plateau is below our 2775mv shutdown cutoff. So the logger only operates for the brief span where the new CR2032 is still above its rated nominal voltage. Even the 1000μF cap will not fix that problem – you need a different battery chemistry. With the loggers drawing the normal 2-5μA sleep current they run fine in the freezer; in fact we use water's 0°C phase transition as a physical reference when calibrating the onboard NTC thermistors.

A DIY Pressure Chamber to Test Underwater Housings

Pressure testing has been on the to-do list for ages, but the rating on the PVC parts in our older housings meant we weren't likely to have any issues. However, the new two-part mini-loggers fit inside a thin-walled falcon tube, which raised the question of how to test them. There are a few hyperbaric test chamber tutorials floating around the web, and we made use of one built from a scuba tank back at the start of the project, but I wanted something less beefy, and easier to cobble together from hardware store parts. Fortunately Brian Davis, a fellow maker & educator, sent a photo of an old water filter housing he'd salvaged for use with projects that needed pressure tests. Residential water supply ranges from 45 to 80 psi, so this could replicate conditions down to about 55m. That's good for most of our deployments and certainly farther than I was expecting those little centrifuge tubes to go.
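The depth equivalence is just hydrostatic pressure: 1 psi ≈ 6895 Pa and a metre of seawater is roughly 10 kPa, so 80 psi works out to about 55m. A quick sanity check:

```cpp
// Converts gauge pressure in psi to an approximate depth in seawater.
float psiToSeawaterMetres(float psi) {
  const float PA_PER_PSI   = 6894.76;
  const float PA_PER_METRE = 1025.0 * 9.81;     // rho * g for seawater
  return psi * PA_PER_PSI / PA_PER_METRE;       // 80 psi -> ~55 m
}
```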

This mini pressure chamber was made from a Geekpure 4.5″x10″ water filter housing, 2x male-male couplers, a garden tap, & a pressure gauge with a bicycle pump inlet. (~$70 for this combination) The relief valve & o-ring required silicone grease to maintain pressure.

I first tested 50mL 'Nunc' tubes from Thermo. These are spec'd to 14psi/1atm, but that's a rating under tension from the inside. I put indicator desiccant into each tube so small/slow leaks would be easy to see, and used a small bicycle pump to increase the pressure by 5psi per day. These tubes started failing at 25psi, with 100% failure just over 30psi. Multiple small stress fractures occurred before the final longitudinal crack, which produced an audible 'pop' – often four or more hours after the last pressure increase. If 20psi is the maximum 'safe' pressure for these tubes, then the 50mL tubes can be deployed to about 10m with some safety margin for tides, etc. This result matches our experience with these tubes, as we often use them to grab water samples while diving.


As expected, the self-standing 30mL tubes proved significantly more resistant. All of them made it to 45psi and then progressed through various amounts of bending/cracking up to 100% failure at 55psi. Where the caps were reinforced (by JB weld potting a sensor module) the rim threads of the cap sometimes split before the tube itself collapsed:

Silicone grease was added to some of the caps although none of the dry ones leaked before the bodies cracked.

So the 30mL tubes have a deployment range to 25m with a good safety margin. The plastic of these tubes was somewhat more flexible, with some crushing almost flat without leaks. This implies we might be able to take these a little deeper with an internal reinforcement ring (?)

The next experiment was to try filling the tubes with mineral oil to see how much range extension that provides:

A third logger was submerged using only a sample bag:

The bag was included to test the 'naked' DS3231 & 328p chips. We've had IC sensors fail under pressure before (even when potted in epoxy), although it's possible the encapsulation itself was converting the pressure into other torsional forces that wouldn't have occurred if the pressure was equally distributed.

Again we moved in 5 psi increments up to 80 psi – which is the limit of what I can generate with my little bicycle pump. At 50psi some mineral oil seeped from the bag and at 70psi the ~1cm of air I’d left in the 50mL tube caused similar leakage. On future tests I will spend more time to get rid of all the bubbles before sealing the housings.

At 70psi the 50mL tube dented & sank and the lid started seeping oil (but did not crack)

The loggers continued blinking away for several days at 70, 75 & 80psi, but eventually curiosity got the best of me so I terminated the run. We were also getting uncomfortably close to the 90psi maximum test pressure on that polycarbonate filter housing. I was hoping to have some weird artifacts to spice up this post but no matter how hard I squint there really were no noticeable effects in the data at any of the pressure transitions – basically nothing interesting happened. I thought the resistive sensors would be affected but the RTC & NTC temperature logs have no divergence. The LDR looks exactly like a normal LDR record with no changes to the max/mins outside of normal variation. The battery curves are smooth and essentially indistinguishable from ‘dry’ bookshelf tests on the same cells. But I guess in this kind of experiment success is supposed to be boring… right? With mineral oil these little guys can go anywhere I can dive them to – even if the ‘housing’ is little more than a plastic bag.

One thing of note did happen after the run: I accidentally dropped the 30ml logger on the counter while retrieving it from the chamber, and a thin white wisp of 'something' started swirling around the clear fluid inside. This developed slowly, and my first guess was that the capacitor had cracked and was leaking (?)

By the time I managed to capture this photo, the fine ‘smoke’ seen earlier had coalesced into a larger foam of decompression bubbles.

After emptying that oil, the logger itself went into a red D13-flashing BOD loop for a while, but by the time I'd cleaned it up enough to check the rail, the battery had returned to its nominal 3v. My theory is that a similar off-gassing event was happening inside the battery – briefly causing a droop below the 2.7v BOD threshold. So it's possible that while the loggers are not depth-limited per se when using mineral oil, components like the separator in a battery may still be vulnerable to 'rate-of-change' damage. After more than two weeks at depth, I had vented the chamber in less than a minute. Of course, when retrieving loggers in the real world I'd have to do my own safety stops, so this hazard may only affect loggers that get deployed/retrieved on a drop line.

I’ll run these loggers on the bookshelf for a while to see if any other odd behaviors develop. After that it will be interesting to see how well I can clean them in a bath of isopropyl (?) as I suspect that the mineral oil penetrated deep into those circuit board layers.

Addendum: 2023-05-30

Although the unit's sleep current was the same as before the pressure testing, the battery in the 30mL tube barely made another twelve hours on the bookshelf before the voltage dropped again – well short of the expected remaining run time. So it's a safe bet that any deployment which exposes coin cells to pressure at depth is a one-shot run. Given how cheap these batteries are, that's pretty much how we treat these little loggers even if they remain dry.

Addendum: 2023-12-01

Short 30ml tubes work well for single-sensor applications, but classroom labs needed to switch between different sensor modules easily. So we added 3D printed rails holding mini breadboards to provide this flexibility, and the 50mL centrifuge tubes provide the space for these additions. They may not have the same depth range, but they are robust enough for most student experiments.

“Too Ugly to Steal & Too Heavy to Carry”: Insights from a decade of rain gauge deployment

A typical climate station from our project with other sensors protected from the direct sun under the bricks. Those loggers get checked carefully because scorpions are particularly fond of these brick stacks.

Most experiments require weather information to put environmental trends into context. So even though the majority of our sensor network is under ground, or under water, each study area includes a climate station on the surface. Our field sites are rarely close enough to government stations for their data to reflect local conditions because the official stations are spatially biased toward population centers and coastlines. As a result, we operate about ten weather stations and of the sensors they contain, tipping bucket rain gauges (TRGs) can be challenging to maintain at stations that only get serviced once or twice a year.

Where to spend your money

A fieldwork photo from early in the project when we were trying many different rain gauge designs. The aluminum funnels at the back are field repairs after the originals became brittle & cracked. Over time, this happens to all of our plastic funnel gauges. It’s worth noting that those aluminum funnels also corrode with organic acids from debris, but that takes 3-4 years instead of just months.

EVERYTHING exposed to full sunlight must be made of metal if you want it to last more than a year. I know there are plenty of tempting rain gauge designs on Thingiverse, but we've yet to see even hardened 3D prints stand up to tropical conditions. This is also true for Stevenson screens, where I'd recommend a stack of metal bowls on stainless threaded rods (like those used by the Freestation project) over most of the pre-made ones on the market. Local varmints love using climate stations as chew toys.

A typical station ready to deploy: Left: Hobo/Onset RG2 and right is the older 6″ Texas Electronics gauge it was based on. The separate loggers recording each TRG also record pressure & temp. The central logger records RH%, but RH sensors are so prone to failure that we no longer combine them with anything else. During installation, washers can be added for leveling where the gauges are bolted to the brick.

If you need one, then you actually need two. So long as you follow that first rule, it's better to install two medium quality gauges rather than a single new one that eats your budget. When you're replacing four to six gauges per year, lighter six-inch diameter units are much easier to transport. Be sure to have a receipt ready for import duty, and even if you only paid $100 for that used gauge on eBay you should expect an additional $100 getting it into another country (and significantly more for some shiny new gauge that doesn't have any scratches or dents on it yet). Another reason to double up is that you can pack them into different suitcases. When the airline loses a bag – which happens more often than you'd expect – you still have at least one to deploy. Finally, if you install dual TRG's from the start of your project, you then have the option of temporarily re-allocating to singles if a bad weather event destroys half your installations.

A low budget hack that you can maintain is better than an expensive commercial solution that you can’t. Avoid any system with special unobtainium batteries or connectors that you can’t buy/repair at your fieldwork destination. That sweet looking ultrasonic combo you were drooling over at AGU was probably engineered for the US agricultural market, and may not work at all in Costa Rica. If you do start testing acoustic or optical rain sensors, then have a second low tech backup on the station beside it. Most methods have some sort of ‘blind spot’ where they generate inaccurate data and the only way to spot that is to have data from a different device to compare. Reed switches also have the advantage that they require no power to operate.

A new gauge with a funnel full of standing water after only six months.
The debris snorkel plugged because it was designed for fine mid-west field dust, rather than the soggy leaf debris blowing around in a tropical storm. Pine needles cause similar headaches for researchers in northern climates.
Watch out for siphon mechanisms at the bottom of funnels designed to improve accuracy.
Anything that makes the flow path more convoluted will eventually clog – so I cut them out.

Location, Location, Location

Installation guidelines for weather stations usually make assumptions that only apply in wealthy first-world countries. This is hardly surprising given that even mid-range kit will set you back $1,000 and pro-level equipment can top $10,000 once you include the wireless transmitters & tripod mounting system. But our research almost never happens under such genteel conditions, so here's my take on some of those serving suggestions:

This station has never been disturbed.
A brick stack used to raise the funnels above the roof edge walls. These are bound with construction adhesive and industrial zip ties. Rooftop stations are still affected by high winds and falling branches, but just as often the disturbance is from maintenance people working on the water tanks, etc.
  1. Place the weather station in an open area, free from obstructions such as trees or buildings, to ensure proper air flow and accurate wind measurements.
    So what do you do if those open areas only exist at all because someone cut down trees to build? And anemometer measurements are only possible if your kit can stand being hit by several tropical storms per year. Not to mention the amount of unwanted attention they draw. Wind data is one of the few things we rely on government & airport stations for.
  2. Choose a location with a stable and reliable power supply, or consider alternative power sources such as solar panels or batteries.
    The expectation of reliable electricity / internet / cell phone reception is as humorous to a field scientist as the expectation of getting a hot shower every day. For more giggles, why not pop over to the next geo-sci conference in your area and ask them how long their solar powered station in Michigan ran before it was riddled with buckshot. Batteries are your only option, and the system should be able to run at least twice as long as your expected servicing schedule because things never go according to plan.
  3. Locate the weather station in an area that is easily accessible for maintenance and repairs.
    Even in areas that regularly get pummeled by hurricanes, vandalism/theft is our biggest cause of data loss. Any equipment within reach of passers-by will be broken or missing within a couple of months – especially if it looks like a scientific instrument. So it’s worth a good hike through dense jungle to protect your data, even if that makes the station harder to access.
  4. Choose a location away from any artificial sources of heat, such as buildings or parking lots.
    Rooftops are the only locations where we’ve managed to maintain long term stations because they are persistent, hidden from view, and the surrounding trees have been cleared. And in an urban environment…isn’t that, you know, the environment? Yes the thermal data is off because those rooftops go well over 45°C, but temperature is the easiest data to get from tiny little loggers that are more easily hidden at ground level.
  5. Consult with local authorities and meteorological agencies to ensure that the location meets any necessary standards or regulations.
    A solid long-term relationship with the land owner, and your other local collaborators is vital for any research project, but don’t expect local authorities to make time for a friendly chat about your climate station. NGO’s are usually run by volunteers on shoe-string budgets so they’ll be grateful for any hard data you can provide. However, those same groups are often a thorn in the paw of the previously mentioned authorities. Threading that needle is even more complicated when some NGO’s are simply place-holders for large landowners. In addition to significant amounts of paperwork, public lands suffer from the problem that legislation & staff at the state/territory level can change dramatically between election cycles, sometimes to the point of banning research until the political wind starts blowing in a different direction.

Maintenance

The best maintenance advice is to have separate loggers dedicated to each sensor rather than accumulating your data on one ‘point of failure’ machine, especially when DIY loggers cost less than the sensors. We try to bring enough replacement parts that any site visit can turn into a full station rebuild if needed.

After six years in service I’m surprised this unit hasn’t been zapped by lightning.
Even with zip-tie bird spikes this gauge still accumulates significant poop each year. This passes through the main filter screen which stops only sticks, seeds & leaves. Chicken wire is another common solution to the bird roosting problem that’s easy to obtain locally.
Funnel & screen after the annual cleaning. This stainless steel kitchen sink strainer works far better than the commercial solutions we’ve tried because it has a large surface area that rises above most of the debris. It is installed at a slight angle and held in place by wads of plumbers epoxy putty. This has become a standard addition to ALL of our rain gauges.
You’d think name brand gauge makers would use stainless steel parts – and you’d be wrong. Sand & coat those internal screw terminals with grease, conformal, nail polish, or even clear acrylic spray paint if that’s all you can find locally. This also applies to pipe clamp screws which will rust within one year even if the band itself is stainless.

Like bird spikes and debris snorkels, there are several commercial solutions for calibrating your gauge, but my usual procedure is to poke a tiny pin-hole in a plastic milk jug or coke bottle and add 1 litre of water from a graduated cylinder. Placing this on the funnel of a rain gauge gives a slow drip that generally takes about 30 minutes to feed through. The slower you run that annual calibration test the better, and ideally you want an average from 3-5 runs. Of the many gauges we've picked up over the years, I have yet to find even new ones that aren't under-reporting by 5-10%, and it's not unusual for an old gauge to under-report by 20-25% relative to its original rating. Leveling your installation is always critical, but this can be difficult with pole-mounted gauges; in those cases you must do your calibration after the TRG is affixed. I rarely move the adjustment stops on a gauge that's been in place for a couple of years even if the count is off, because that's less of a problem to deal with than accidentally shearing those old bolts with a wrench.
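If you want a number to compare your tip count against, the expected total falls straight out of the funnel geometry. A worked example (the 6″ funnel and 0.2mm/tip figures are just illustrative):

```cpp
#include <math.h>

// Expected tips for a known volume poured through a funnel of a given diameter.
long expectedTips(float litres, float funnelDiameter_mm, float mmPerTip) {
  float area_mm2 = M_PI * (funnelDiameter_mm / 2.0) * (funnelDiameter_mm / 2.0);
  float depth_mm = (litres * 1.0e6) / area_mm2;   // 1 litre = 1,000,000 mm^3
  return lround(depth_mm / mmPerTip);
}
// e.g. expectedTips(1.0, 152.4, 0.2) ≈ 274 tips; counting 5-10% fewer than that
// matches the under-reporting described above.
```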

The Data

Rain gauges have large nonlinear underestimation errors that usually decrease with gauge resolution and increase with rainfall rate – especially the kind of quick cloud-burst events you see in the tropics. Working back from the maximum ranges, you’ll note that few manufacturers spec accuracy above two tips per second. So that’s a reasonable ‘rule of thumb’ limit for small gauges with plastic tippers that will plateau long before larger gauges with heavier metal tipping mechanisms. Gauge size is always a tradeoff between undercounting foggy drizzles at the low end (where smaller tippers are better) or undercounting high volume events (where larger gauges generally perform better). Even if you hit the sweet spot for your local climate, storms can be so variable that a perfectly sized & maintained gauge still won’t give you data with less than 15% error for reasons that have little to do with the gauge itself.

This adds 5-10 ms of hardware de-bounce to the reed switch. Most gauges have switch closure times under 100ms, with 1-2ms of bounce on either side. After the FALLING trigger, sleep for ~120msec before re-enabling the interrupt. You can eliminate the external 5k pullup by using the 25k internal pullup @ D3, but your rise time changes from 10ms to 25ms and the resulting divider only drops to 15% of Vcc.
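On the firmware side, the caption's detach-and-sleep approach can also be expressed as a simple lockout window inside the interrupt handler. A minimal sketch (the pin and timing follow the caption; the rest is illustrative, not our production code):

```cpp
const byte REED_PIN = 3;                       // INT1 on a 328p
volatile uint16_t tipCount = 0;
volatile uint32_t lastTipMillis = 0;

void tipISR() {
  uint32_t now = millis();                     // reading millis() inside an ISR is fine
  if (now - lastTipMillis > 120) {             // ignore bounce & switch-closure time
    tipCount++;
    lastTipMillis = now;
  }
}

void setup() {
  pinMode(REED_PIN, INPUT_PULLUP);             // or the external pullup + RC debounce
  attachInterrupt(digitalPinToInterrupt(REED_PIN), tipISR, FALLING);
}

void loop() {
  // at each logging interval: briefly disable interrupts, copy & clear tipCount
}
```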

All that's to say: your analysis should never depend on rainfall the way you might rely on temperature or barometric data. More records, from more locations, always give you a better understanding of site conditions than 'accurate' data from a single location. Of course, that gives you the "man with two watches" problem when one of the gauges is in the process of failing. The most difficult situation to parse is where something slowly plugs one of the funnels but both gauges are still delivering plausible data. A signature of this kind of failure is that one gauge of a pair starts delivering consistent tip rates per hour during events while the other gauge shows larger variation. An alarm bell should go off in your head whenever you see flattened areas on a graph of environmental data:

Wasps & termites are particularly fond of rain gauges because they naturally seek shelter underneath them – where the drain holes are.
Daily Rainfall (mm) record from the gold funnel TRG at the top of this post showing before (green) & after (red) the storm that clogged the filter. Failure is indicated by prolonged curving descents followed by a long tail of low counts as the trapped water slowly seeps through the blockage. Normal rainfall data looks spikey because it can vary dramatically in as little as 15 minutes with long strings of zeros after each rain event.
Did I mention snakes? Yep, they love our climate stations. My guess is they go in after residual water left in the tipper mechanism.

These problems are much easier to sort out if both of the gauges at a given station are calibrated to the same amount of rainfall per tip (usually 0.01inches or 0.2mm) and disappear entirely if you have three records to compare.

While I've been critical of the cheap plastic tippers you find in home weather station kits, they still have a place in budget EDU labs, and I have more than a few in my back garden feeding data into prototypes for code development. A new crop of metal & plastic hybrid gauges has started appearing on Amazon/eBay for about $150. The build quality seems a bit dubious, but we are going to give them a try this year anyway to see if they can serve as backups to the backups. As they say in the army: "Quantity has a quality all its own". I wonder if any citizen science projects out there could adopt that motto?

Addendum: 2023-04-17

As luck would have it, that cheap Chinese gauge arrived from Amazon the day after I made this post. I wasn't expecting much from a $150 rain gauge, but this one turned out to be such an odd duck that I'll include it here as a warning to others. On the right you see a photo from the listing, which made me think both the body and the funnel were made from brushed metal. What actually arrived made it clear the whole listing was carefully crafted to hide some pretty serious design flaws.

Another dented delivery – though, to be fair, the metal is tissue-paper thin. At least this one didn't get stolen from the front porch. Aluminum spray paint was used to disguise the crappy plastic funnel in the listing photos.
You could snap any part of this mechanism with finger pressure. And I wouldn’t take bets on how waterproof that junction box is either. There were no photos of this mechanism in the listing, which should have stopped me right there.

The thing that makes this such a good example of bad engineering is that they first optimized production cost with cheap brittle plastic that will likely fail within a year. As a result, the tipper ended up so light that they had to add a second funnel & tipping mechanism to deal with the momentum of drops falling from the main funnel. That second mechanism is so small it's guaranteed to plug up with the slightest amount of debris – causing the unit to stop working even before the plastic starts to crack. If they had simply added that extra material to a larger, heavier bottom tipper the upper mechanism wouldn't have been necessary.

What the heck?

What takes this from merely bad to actually funny was the inclusion of an "Intelligent rainfall monitoring system for data upload via Ethernet, GPRS and RS485". I presume that was intended to connect with 'industry standard' meteorological stations, but who'd tack a cheap sensor like this onto one of those $1000+ loggers? Even stranger to me is the idea that you'd waste that much power on a simple reed switch. Fortunately there is a terminal block where you can bypass all that baggage, though it's also fragile to the point of being single-use.

In the current political environment, the last thing I’d do is put something like this on my ethernet.

Bottom line is that you are better off buying a used unit from a quality manufacturer than you are getting a new one from a company that doesn’t have a clue what they are doing. For comparison, here’s how the mechanisms inside decent gauges look:

While the tipper inside a Texas/Onset gauge is made of plastic it is extremely tough. The needle point pivots are hardened and we’ve yet to see one fail. The magnets however do rust, but like the reed switch they are easily replaced with a bit of CA glue. Magnets have fallen from tippers on gauges from several different companies because differential expansion in tropical heat cracks the epoxy.
This High Sierra was built like a Russian tank and you see similarly rugged components inside older gauges from brands like Vaisala. After retiring the gauge itself, we repurpose these indestructible innards for drip monitoring projects inside caves.

Waterproofing your Electronics Project

Arielle Ginsberg examines the sponges covering a flow sensor deployed in a coastal outflow canyon.

How to waterproof electronics? Basically it’s a combination of cleaning, coating, encapsulation & housings. We’ve been deploying our loggers under water since 2013 and although I posted many detailed build tutorials along the way, it’s time to gather some of that distributed material into a summary of the techniques we use to make our loggers more reliable. This post will focus on options available to someone working with a modest budget and also include a few interesting methods we haven’t tried yet for reference. To put all this in context; we deploy our DIY loggers to typical sport diving depths and usually get solid multi-year operation from our underwater units.

The major sections of this post are:
Sealants, Encapsulation, Housings & Connectors, Other Protection Methods


Sealants

No matter what coating you use, everything must be scrupulously clean before it's applied. Corrosion-inducing flux is hygroscopic and there's always some left hiding underneath those SMD parts – especially on cheap eBay modules. That means scrubbing those boards with alcohol and an old toothbrush, drying them with hot air & cotton swabs, and then handling them by the edges afterward. Boards with only solid-state parts (like the ProMini) can be cleaned using an ultrasonic cleaner and 90% isopropyl, but NEVER subject MEMS sensors or RTC chips to those vibrations. Polymer-based RH sensors like the BME280, or MS5803 pressure sensors with those delicate gel-caps, also get careful treatment. After cleaning, let components dry overnight in a warm place before you coat them with conformal. I clean new modules as soon as they arrive, and store them in sealed containers with desiccant.

This $25 jewelry cleaner gets warm during the 5-10 minutes it takes to get the worst parts clean, so I run it outside to avoid the vapours.

MG Chemicals 422-B Silicone Modified Conformal Coating is the one we've used most over the years. Even with a clean board, adhesion to raised ICs can be tricky as surface tension pulls it away from sharp edges. Like most conformals, 422-B fluoresces under UV-A, so a hand-held blacklight lets you check if it's thin at some corner, or if you simply missed a spot. The RC/drone crowd regularly report on many of the other options on the market like Corrosion-X, Neverwet, KotKing, etc. I've never seen a head-to-head test of how well the different conformals stand up over time, but the loggers we've retired after 5-6 years in service look pretty clean even though silicone coatings are not water-vapour proof. I like the flow characteristics of 422 for our small-scale application, though the vapours are nasty enough to make you wonder how much brain damage your project is really worth. You can also just burn the stuff off with a soldering iron if you need to go back for a quick modification after it's been applied. Conformals can be made from other compounds like acrylic or urethane, and at the top of the market you have vacuum-deposited coatings like Parylene.

Nail polish gets mentioned frequently in the forums and it’s usually a type of nitrocellulose lacquer. While it’s non-conductive and non-corrosive, acetate chemistry is not far off acetone which solvates a lot of stuff. So nail polish may soften some plastics and/or the varnish protecting your PCBs. It might also wipe the lettering off some boards. So the trick is to start with the thinnest layer possible and let that harden completely before applying further coats. Nail polish softens somewhat when heated above 200°C with a hot air gun enabling you to scrape it away if you need to rework something after covering. Overall it’s a good low-budget option that’s less complicated to apply than a UV cured solder mask solution.

One of our many early failures before we decided to use only transparent epoxies. The outer surface of this epoxy was intact; giving no hint of what was happening below.
Some epoxies permit slow water vapour migration, leading to corrosion at points with leftover flux. Like the white example above, this potting was still OK at the surface. Both of these failures pre-date our use of conformal on everything.

You never get 100% coverage so the areas underneath components usually remain unprotected. But coatings really shine as a second line of defence that keeps your logger going when the primary housing suffers minor condensation or makes the unit recoverable after a battery leak. Even when we intend to pot a circuit completely, I still give it a thin coat of conformal to protect it during the week long burn-in test before encapsulation. (If you are using cheap sensors from eBay, expect ~20% infant mortality) Be careful not to let coatings wick onto metal contacts like those inside an SD card module or USB connector and remember to seal the cut edges of that PCB so water can’t creep between the layers.

The delicacy of application required when working with IC sensors means that spray-on coatings are usually a bad idea, but there are exceptions. Paul over at Hackaday reports success using clear acrylic spray paint as a kind of poor man’s Parylene after “comparing the MSDS sheets for ‘real’ acrylic conformal spray coatings, and acrylic paint. All that’s missing is the UV indicator, and the price tag.” He uses this technique in outdoor electrical boxes but the first thing that comes to my mind is coating the screw terminals inside most rain gauges (see photo at end of post), and the exposed bus-bars you see in some climate stations.


Potting / Encapsulation

Hot glue is a quick way to seal one side of a pass-through so you can pour liquid epoxy on the other.

Hot-melt Glue: Glue sticks come in a variety of different compounds. But it’s hard to know what’s in the stuff at your local hardware store so my rule of thumb is to just buy the one with a higher melting point. If you are gluing to something with a high thermal mass or a surface that can transfer heat (like copper PC board) the glue will freeze before it bonds. So preheating the item you are working on with a hot air gun before gluing is usually a good idea. I’ve used glue sticks for rough prototypes more times than I can remember, sometimes getting several months out of them before failure in outdoor locations. Cheaper no-name sticks tend to absorb a lot of water(?) and have more trouble sticking to PCB surface coatings. So it’s a temporary solution at best unless you combine it with something more resistant like heat shrink tubing. Add glue to what you’re sleeving, and it will melt and flow when you shrink – effectively a DIY adhesive lined heatshrink:

Here I used leather gloves to squeeze the hot-melt glue inside adhesive lined heat-shrink until it covered the circuit without bubbles. This one lasted ~8 months and then we switched to epoxy fills.

Hot glue is also quite handy for internal stand-offs or just holding parts together if they are too irregularly shaped for double-sided mounting tape to do the job. Isopropyl alcohol helps remove the glue if you need to start over.

Superglue & Baking Soda: These dollar-store items are perfect for sealing & repairing the polymer materials that most waterproof kit is made from. Adam Savage has a great demo of this on YouTube. That gusseting build-up technique is so fast it now accomplishes many of the things I used to do with hot glue. CA glue & spray-on accelerant can also be used to improve the strength of 3D prints, as demonstrated by the ever-mirthful Robert Murray-Smith. The sealed surface of your print can then be written on with a Sharpie marker without the black ink bleeding into the PLA layers, although I also use clear matte-finish nail polish for this labeling.

At this scale the viscosity of your encapsulating material is as important as any vapours it might give off. To avoid wicking problems, a ring of ‘dry’ plumbers putty can secure a filter cap over the sensor after the liquid potting compound sets.

Silicone Rubber comes in two basic types: 'acid cure', which smells like vinegar, and 'neutral cure', which gives off alcohol while it hardens (often used in fish-tank sealants). Never use acid-curing silicone on your projects. Hackaday highlighted a method using Tegaderm patches to give silicone encapsulations a professional appearance, although you can usually smooth things well enough with a finger dipped in dish detergent. In another Hackaday post on the subject, a commenter recommends avoiding tin-cured RTV silicones in favor of platinum-cured, which has a longer lifespan and less shrinkage. Really thick silicone can take several days to cure, but accelerants like corn starch or reptile calcium powder can cut that to a few hours. It's also worth knowing that silicones expand/contract significantly with temperature, because this can mess with builds using pressure or strain sensors.

The $5 3440 Plano Box housings we use on the classroom loggers stand up to the elements well enough in summer months, but rarely have an adequate seal for the temperature swings in fall or winter. Judging by this post over at AVRfreaks, this is a common issue with most of the premade IP68 rated housings on Ebay/Amazon.

While silicone is waterproof enough for the duration of a dive it is NOT water-vapor proof. I often use GE Silicone II (or kafuter K-705) to seal around the M12 cable glands we use on student projects. However, water vapor eventually gets in when the housings "cool down & suck in moist air", causing condensation on the upper surface. Any container sealed with SR will eventually have an internal relative humidity comparable to the outside air unless your desiccants prevent that from happening. Always use desiccants with color indicator beads so you can see when they need to be replaced. Old desiccant pouches can be 'recharged' overnight in a food dehydrator, and used dehydrators can usually be found for ~$10 at your local thrift shop. Dehydrators are also great for reviving old filament if you have a 3D printer.

Liquid Epoxy: If money is no object, then there are industrial options like Scotchcast, but many come in packaging that dispenses volumes far too large for a small batch of loggers. The best solution we could find at the start of this project was Loctite's line of 50mL 2-part epoxies designed for a hand-operated applicator gun. Used guns can be found on eBay and there are plenty of bulk suppliers for the 21-baffle mixing nozzles at 50¢ each or less. Loctite E-30CL has performed well over years of salt-water exposure on our PVC housings, though it does fog & yellow significantly after about six months. Check the expiry date before buying any epoxy because they harden inside the tube when they get old. I've often received epoxies from Amazon that are only a month or two from expiring, so don't buy too much at one time. And they don't last long once you crack the seal, so I usually line up several builds to use the entire tube in one session.

A background layer of black EA E-60NC potting compound was used to improve the visual contrast. Once that set a clear acrylic disk was locked into place over the OLED with E-30CL epoxy – taking care to avoid bubbles. The acrylic does not yellow like the epoxy and can be thick enough to protect relatively delicate screens from pressures at depth.

My favorite use of liquid epoxy combines it with heat shrink tubing to make long strings of waterproof sensors:

A short piece of adhesive lined heat shrink seals one end of the clear tube to the cable. Epoxy is added to fill about 1/3 the volume. Then gentle heating shrinks the clear tube from the bottom up until the epoxy just reaches the top. Another adhesive lined ring seals the epoxy at the top of the tube. Then gentle heating of the clear heatshrink contracts it into a smooth cylinder. Extra rings are added to strengthen the ends.

We’ve deployed up to 24 DS18b20 sensors on a single logger running underwater for years – failing eventually when the wires broke inside intact cable jackets because of the bending they received over several deployments. This mounting takes a bit of practice, so have a roll of paper towels nearby before you start pouring and I usually do this over a large garbage can to catch any accidental overflow.
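The reason one cable can carry so many DS18b20s is that every sensor has a unique 64-bit ROM address, so the whole string shares a single data pin. A hedged sketch using the common OneWire library (which may differ from our own code):

```cpp
#include <OneWire.h>

OneWire bus(7);                        // data pin is illustrative

void listSensors() {
  byte addr[8];
  bus.reset_search();
  while (bus.search(addr)) {           // walks every device on the bus
    if (OneWire::crc8(addr, 7) == addr[7] && addr[0] == 0x28) {
      // addr[] is this DS18B20's unique ROM code - record it so each reading
      // can be matched to a physical position along the sensor string
    }
  }
}
```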

This image shows the typical appearance of E30CL after several months in seawater. The brown dot is a marine organism that bored into the epoxy, but they have never tried to drill through the housing itself… which says something about the toxicity of polyvinyl chloride.

The 2-part fiberglass resins used for boat repair are another good potting option, though they are often opaque with unusual coloration. Low viscosity mixes can be applied with precision using disposable syringes. It's important that you transfer the stirred resin into a second container before pulling it into the syringe because there's often a poorly mixed layer stuck to the sides of the first mixing cup. 3D printed shells are often used as casting molds, but if all you need is a rectangular shape then I'd use a LEGO frame lined with plastic food wrap. You can make single-use molds that conform to unusually shaped objects with sheets of modeling clay. When encapsulating large volumes you can make that expensive epoxy go farther with 'micro-balloon' fillers made from tiny phenolic or glass spheres. I've used old desiccant beads for this many times. Other inert fillers like talc powder are sometimes used to lower peak temps during the curing process, because fast-setting epoxies get quite hot – sometimes too hot to touch. And speaking of heat, all encapsulation methods open the possibility that high power components could cook themselves. So avoid covering any heat sinks when you pot your boards.

Filler / Paste Epoxies: J-B Weld is a good low-budget option for exposed sensor boards. This two-part epoxy adhesive bonds well to most plastic surfaces and the filler it carries gives a working consistency somewhere between peanut butter and thick honey. This is helpful in situations where you want to mount something onto a relatively flat surface like the falcon tubes we use with our 2-part Mini Loggers:

This BMP280 module already has a coating of conformal.
Shift the epoxy to the edges of the sensor with a toothpick

Although the original grey formulation gets its color from metal filings, it is an electrical insulator. The older style JB Weld that comes in two separate tubes is slightly thicker than that sold with an applicator syringe. It's also worth noting that the stuff really needs at least 24 hours to set – not the 6 hours they claim on the package. There is also a clear version that can be used to protect light sensors, but I've yet to field test that in harsh enough conditions to see how it ages:

JB can also be used to secure delicate solder connections.
PTFE tape is a good diffuser if light levels get too high.
Unlike E30CL, clear JB-weld retains all those tiny bubbles.
A JB-weld coated DS18b20 after 6 months in the ocean. Specks of iron-particle rust can be seen, but when I broke away the coating the can underneath was still clean & shiny.

Wax: I haven't tried this yet but it sounds like it could be fun: refined paraffin can be purchased in food-grade blocks for sealing jars, etc. at most grocery stores, and it flows well into small component gaps. It's also removable; however, the 45°C melting point which makes that possible is too low for outside deployments where I've seen loggers reach 65°C under tropical sun. A tougher machinable wax can be made at home by mixing LDPE (plastic grocery bags) or HDPE (food containers) into an old deep fryer full of paraffin wax. The general recipe is a 4:1 ratio of paraffin to LDPE/HDPE, and this raises the melting point enough to withstand summertime heat. Or you could try Carnauba wax, which has a melting point above 80°C. You probably want to do partial pours with any wax-based approach as shrinkage can be significant. If I had to make something even more heat resistant I'd consider an asphalt-based roofing cement. That's a one-way trip, but it should last quite a while outside.

If you’re spending company money, it’s worth noting that many professional potting compounds like those from 3M are sold in hot-melt glue stick formats [usually 5/8″(16mm) diameter rather than the more common hobby market 1/2″]. This dramatically reduces waste & mess compared to working with liquid epoxies. Of course, it’s unlikely a DIYer will be able to use them as the applicators alone can set you back $300 to $600 USD. Another factor to consider is the different expansion rates of the circuit you are trying to protect vs the compound you are using for the encapsulation: hard epoxies may cause electrical failures by subjecting components to more stress when the environment is cycled between extreme temperatures. In those cases it is probably better to use softer compounds.


Housings & Connectors

Although 3D printers are now affordable, we still use plumbing for our underwater housings so that others can replicate them with parts from their local hardware store. The design has changed significantly over time but this tutorial video from 2017 still stands as the best overall description of the ‘potting wells’ method we use to mount sensors on those PVC housings. It also shows how to make robust underwater connectors using PEX swivel adapters:

Smooth surfaces on the inside of those wells are scored with a wire brush or rough grit sandpaper before pouring the epoxy. After solvent welding, leave the shells to set overnight before adding epoxy because bad things happen when you mix chemistries. In fact, that's a good rule for all of the things listed in this post – otherwise that expensive potting compound could turn into a useless rubbery mess. Another important thing to note is that we break the incoming wires with a solder joint that gets encapsulated before the housing penetration. This is more reliable than cable glands because water can't wick along the wires if the jacket gets compromised. The shell shown in that video uses a Fernco Qwik-Cap as the bottom half of the housing, and quite a few Qwik-Cap housings have survived years under water, although the flexing of that soft polymer limits them to shallower deployments. So these wide-body units get used primarily for drip loggers & surface climate stations. It's worth noting that water vapour slowly migrates through the plastic knockout cap on the upper surface of our drip counters, so they require fresh desiccants once a year even though the logger could run much longer than that. A reminder that over the time scales needed for environmental monitoring, many materials one thinks of as 'waterproof' are not necessarily vapour proof.

For underwater deployments we developed a more compact screw-terminal build that would fit vertically into a 2″ cylindrical body. After many struggles with salt water corrosion we gave up on 'marine grade' stainless steel and started using nylon bolts to compress the O-ring. But these need to be tightened aggressively as nylon expands in salt water (we usually pre-soak the bolts overnight in a glass of water before sealing). Nylon expansion has also caused problems with the thick 250lb ties we use to anchor the loggers. In high humidity environments, cheap nylon zip ties become brittle and break, while expensive industrial ties stretch and become loose. We're still looking for better options, but when you are working under water you need something that can be deployed quickly.

We’ve tried many different epoxy / mounting combinations on the upper cap of those housings, but with the exception of display screens we stopped using the larger wells for underwater units because the wide flat disk of epoxy flexes too much under pressure. This torsion killed several sensor ICs on deployments below 10m even though the structure remained water-tight.

As our codebase (and my soldering skills) improved we were able to run with fewer batteries – so the loggers became progressively smaller over time. Current housings are made from only two Formufit table leg caps and ~5cm of tubing. The same swivel adapter used in our underwater connector now joins sensor dongles to the housing via threaded plugs. Sensor combinations can be changed easily via the Deans micro connectors we've used since the beginning of the project. Though the photo shows two stacked o-rings, we now use shorter bolts and only one. See this post for more details on the construction of this housing.

EPDM O-rings lose much of their elasticity after a couple of years compressed at 20-25m, so for deeper deployments I’d suggest using a more resilient compound. And there are now pre-made metal housing options in the ROV space that didn’t exist at the start of this project. With the dramatic size reduction in recent models, you occasionally find a good deal on older Delrin dive-light housings on eBay. Another interesting option is household water filter housings made from clear acrylic. They were too bulky for our diving installations, but this Sensor Network project at UC Berkeley illustrates their use as surface drifters.


Other Protection Methods

Mineral oil: PC nerds have been overclocking in tanks of mineral oil for ages, so it's safe at micro-controller voltages. It's also used inside ROVs with a flexible diaphragm to compensate for changes in volume under pressure. Usually a short length of Tygon tubing gets filled with oil and stuck out into the water, or the tube can be filled with water and penetrate into the oil-filled housing. We use a similar idea to protect our pressure sensors from salt water:

The MS5803 pressure sensor is epoxied into a 1/2″-3/4″ male PEX adapter and a nitrile finger cot is inserted into the stem of a matching swivel adapter.
The sensor side gets filled to the brim with mineral oil
The two pieces are brought together
Then tighten the compression nut and use a lubricated cotton swab to gently check that the membrane can move freely.

Moving those membrane-protected sensors onto a remote dongle makes it much easier to recover the sensor after a unit gets encrusted with critters. Oil mounts have worked so well protecting those delicate MS58 gel-caps that I’ve now started using this method with regular barometric sensors like the BMP280. This adds thermal lag but there’s no induced offset in the pressure readings provided there’s enough slack in the membrane. Silicone oil is another option, and I’ve been wondering about adding dye so that it’s easier to spot when those membranes eventually fail. I avoid immersing any components with paper elements, like some old electrolytic capacitors, or parts that have holes for venting.

Bio-fouling on one of our loggers deployed in an estuary river. We only got three months of data before the sensor was occluded.
We remove calcareous accretions by letting the housings sit for a few hours in a bucket of dilute muriatic acid. Many of our loggers get this treatment every season.

Cable Protection: For the most part this comes down to either strain relief, or repairing cuts in the cable jacket. Air-curing rubbers like Sugru are fantastic for shoring up worn cables where they emerge from a housing, though I usually use plumbers epoxy putty for that because I always have it on hand for the housing construction. Sugru is far less effective at repairing cables than something that's cheaper but less well known: self-fusing rubber electrical sealing tape (often called 'mastic' or 'splicing' tape). This stuff costs about $5 a roll and has no adhesive: when you wind it around something it sticks to itself so aggressively that it cannot be unstuck afterward, yet remains flexible in all directions. This makes it perfect for repairs in the middle of a cable, and we've seen it last months under water though it quickly becomes brittle under direct sun. And it does the job in places you can't reach with adhesive-lined heat shrink. I usually slap a coat of Plasti Dip or liquid electrical tape over top of those repairs, which improves the edge seal and makes the patch look better. Self-fusing tape is also great for bulking out cables that are too thin for an existing cable gland, or combining several wires into a water-tight round-profile bundle for a single gland.

However the best advice I can give is to simply avoid the temptation of soft silicone-jacket cables in the first place. Yes, they handle like a dream under water, but you will pay for it in the long run with accidental cuts and hidden wire breaks due to all that flexing. Another hidden gotcha is that silicone compresses at depth, which brings the wires closer together – potentially increasing the capacitance of a long bus enough to interfere with sensor handshakes. Our go-to after many years at the game is harder polyurethane-jacketed cable (like the ones Omega uses for their thermistors). It's a pain in the arse to strip & solder, but you can pretty much drive a truck over it. And somehow that kind of thing always happens at least once during a field season.

Lost count of how many times ants/wasps have bunged up our rain gauges. And I should have coated those screws…

Double housings: Instead of sealing the housing to block out humidity, control the point where it condenses by surrounding an inner plastic housing with a second outer shell made of aluminum. Then let everything breathe naturally with the idea that condensation will happen first on the faster cooling aluminum, thereby protecting the inner components. I’ve heard of this being used for larger commercial monitoring stations but I’ve never been brave enough to try it myself. You want some kind of breathable fabric membrane over any vent holes to keep out dust (to IP6) and especially insects because if there’s a way into your housing they will find it and move in. Another simple but related trick is to fill any void spaces inside your housing with blocks of styrofoam: this minimizes the total volume of air exchanged when the temperature swings.

Addendum 2023-05-24: Testing Underwater Housings

People reading this post might also be interested in the DIY pressure chamber which we've been using to test our little falcon tube loggers. It's made from a household water filter canister, with a total cost of about $70 USD. The domestic water pressure range of 40-80psi overlaps nicely with sport diving depths. The 30mL tubes are stronger for single sensor builds, but the 50mL tubes provide more space for our 2-Module classroom data logger. This model uses two mini breadboards for convenient sensor swaps.

Addendum 2023-07-16:

There's an interesting article on 3D-printed underwater housings over at the Prusa Research blog. I'd use a coating of CA glue with spray-on accelerant to seal those outer surfaces.

2-Part ProMini Logger that runs >1 year on a Coin Cell

This ‘two-part’ logger fits nicely inside a 50mL Falcon tube. With a bit of practice, soldering the Pro Mini & RTC together takes ~30 minutes. The 4K EEprom on the RTC board will hold 4096 1-byte RTC temperature readings (~ 40 days worth @ 15 min. intervals) and that’s easily extended with $1 memory chips or modules.

The EDU build we released in 2020 provides remarkable flexibility for courses in environmental monitoring. However an instructor still needs to invest about five days ordering parts, testing components, and preparing kits for a 15-20 seat course being run remotely (only about half that is needed for in-person courses where the students pin & test the parts themselves). While that's not unusual for university-level lab based subjects, it is something of a stretch for high school teachers. And thanks to COVID chip shortages, modules that were only 99¢ at the beginning of this project could now set you back $5 each. So with all that in mind, we've continued development of a 'lite' version of our logger with the lowest possible prep time. That new baby is now ready for release, with data download & control managed through the IDE's serial monitor window.

With just three core components as our starting point, the only hardware option was to remove the SD card. Groundwork for this change was already in place with our use of an EEprom to buffer data so that high-drain SD saves only occurred once per day. Getting rid of power hungry cards also opened up the possibility of running the entire unit from the coin cell on the RTC module. But a power budget that small will necessarily add complexity to the base code, which must minimize run-time even though EEproms are notoriously slow devices. And most garden-variety memory chips have a lower limit of 2.7v – so a nominal 3v CR2032 can only be allowed to fall about 250mv under load before we run into trouble. That voltage drop increases over time because the internal resistance of a coin cell is only 10 ohms when new, but approaches 100 ohms by end of life.

We pressure tested the centrifuge tubes: 50mL tubes can be deployed to 10m depth, 30mL tubes can go to 20m. And the loggers run fine under mineral oil for deeper deployments.

In addition, it's not unusual to see a 50mv delta at the battery terminals for every 5°C change in ambient, so a standard lithium coin cell will not power the logger below 0°C. But if there's one thing I've learned on this project it's that datasheets only tell you so much about system behavior in the real world – especially with stuff constructed from cheap modules carrying half a dozen unspecified bits. So let's just build one and see how it goes…

Modifying the RTC module:

Clipping the VCC leg (the 2nd leg in from that corner) forces the DS3231 to run from the coin cell full time.
Disconnect the modules indicator LED by removing its limit resistor.
Remove the 200ohm charging resistor & bridge VCC to the backup power line at the black end of the diode.

Cutting the VCC leg depowers most of the logic inside the DS3231. However the chip will still consume an average of 3µA through VBat to keep the oscillator, temperature compensation & comparator logic working. RTC current can spike as high as 650µA every 64 seconds when new temperature readings occur. Bridging VCC to Vbat also means a 3.3v UART will push some sub-milliamp reverse currents through an older cell. But I’ve yet to have a single problem (or even any detectable warming) after many days with loggers connected during development. Despite dire manufacturer warnings that reverse currents beyond 1µA will heat manganese-dioxide/lithium cells until they explode, the ones I’ve used so far survive the abuse without issue.

Three mods to the RTC module: running from Vbat also disables the 32KHz output, so I usually clip that header pin. Watch out for the '-M' variant of the DS3231. We've had several batches of those over the years where the temperature register was off by 5°C or more. Try to use '-N' or '-SN' chips if you can get them.

I've no doubt the UART-connected time is shortening the battery's lifespan slightly; in fact Panasonic specifies that "the total charging amount of the battery during its usage period must be kept within 3% of the nominal capacity of the battery", so it's a good idea to remove the battery if you are spending an extended time with the units connected to the serial line, to keep the total reverse-current time to a minimum. But given our tight operational margin I don't think we can afford to lose two hundred millivolts over a Schottky protection diode. A typical solution would address this by ORing the two supplies with an ideal diode circuit, but that's not an option here as ideal-diode controllers usually waste some 10-20 µA. On a practical level it's easier to just pop in a fresh battery before every long deployment. Drift on these DS3231 RTCs is usually a loss of ~4-5 seconds per month, but could be up to twice that for -M variants of the chip.

Modify the Pro Mini board:

90° header tails on the left side are clipped to avoid accidental contact with the I2C jumpers later. Vcc & Gnd points left long. Load ‘Blink’ to test if the ProMini is working as soon as the header pins are on the board.
Carefully clip away the regulator from the 2-leg side. Also remove the power LED limit resistor.
Optional: Add the regulator's orphaned 4.7µF input cap to the rail by bridging it to VCC.

An 8MHz Pro Mini continues as the heart of our loggers because the 328p is still the easiest low-power option for projects that aren't computationally demanding. These eBay Pro Minis usually sleep below 1µA with the BOD turned off, but 17µA with BOD on. It's worth noting there are clones out there with fake Atmel chips that won't go below 150µA sleep no matter what you do. Cheaper boards usually ship with ceramic regulator caps (instead of tantalums) but that just makes them more resilient if you accidentally connect power the wrong way. At 8MHz the 'official' lowest safe voltage for the 328p is 2.7v, so that's where the default BOD is usually set. But I sleep with the BOD off because I've noticed that if the BOD gets triggered by low battery voltage the processor goes into a high 1mA drain condition, and this makes AAs leak all over the inside of our normal 3-part loggers.

Three removals & one addition prep this Pro Mini clone for assembly. The reset switch is removed to make room for a NTC thermistor circuit. The logger can then only be restarted with a serial connection, but that’s on purpose.

Join the two components:

But the default 2.7v BOD is always active while the processor is operating, so you probably want to stop logging when the rail starts falling below ~2785mv. Also keep in mind that there is a range on that brownout threshold, from a minimum of 2.5v to a maximum of 2.9v, so one 328p may be more tolerant of running at low voltages than another.

Resistor legs wrapped in heat shrink extend the A4/A5 I2C bus. These two wires must cross over each other to align with connections on the RTC.
Add a layer of double-sided foam tape to prevent contact between the two boards. Extend the VCC & GND headers with resistor legs. Then remove the tape backing.
Carefully thread the four I2C bus jumpers through the RTC module's pass-through port. Press the two boards together onto the double-sided tape.
Solder the connections to the RTC module. Now you can see why I trimmed the three header pins on that one side.

NOTE: Don’t trim the VCC & GND wires if you are going to add a rail buffering cap – the leftover ‘tails’ make perfect connection points for that capacitor later. (see below for details)
Clip the (non-functional) 32kHz pin and add solder to the SQW header pin on the RTC module. Solder a resistor leg to interrupt input D2 on the Pro Mini.
Add heat shrink & join D2 to the RTC SQW alarm header.
Then heat shrink the entire stack with ~4.5cm of 25mm (1inch) diameter tubing & cut that away from the battery holder.
The 2-module stack usually draws ~1µA in powerDown, but with part variability some go up to 2µA. Cheap modules often have leftover flux residue which can cause current leaks, so it's worth the time to scrub these boards with isopropyl alcohol before assembly to reach the lowest possible power consumption. I found no significant difference in sleep current between setting unused pins to INPUT_PULLUP or to OUTPUT_LOW.

This two-module combination usually sleeps around 1µA, and most of that is the RTC's (IBATT) timekeeping current, as the 328p should only draw ~150nA in powerdown mode [with BOD off]. If we assume four readings per hour at 5mA for 10msec, the battery life calculator at Oregon Embedded estimates a 220mAh battery will last more than 10 years… which is ridiculous. We know from the datasheet that 575µA temperature conversions bring the RTC average up to 3µA – which isn't showing up on this direct measurement. And there's the battery self-discharge of 1-3% per year. Perhaps most important, there's the complex relationship between pulsed loads and CR2032 internal resistance, which means we'll be lucky to get half the rated capacity before hitting brown-out at 2.7v. A more realistic estimate would start with the assumption that the battery only delivers about 110mAh, with our logger consuming whatever we measure + 3µA (RTC datasheet) + 0.3µA (coin cell self-discharge). We can round that up to 5µA continuous, with four 5mA*10millisecond sensor readings per hour, and we still get an estimated lifespan of about two years. So our most significant limitation is the amount of EEprom memory rather than battery power.
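If you want to sanity-check that estimate without the online calculator, the arithmetic is simple enough to run on the logger itself. The numbers below are the assumptions from the paragraph above (110mAh de-rated capacity, ~5µA continuous, four 5mA x 10ms readings per hour), not new measurements:

// Back-of-envelope lifespan check using the assumptions stated above.
void setup() {
  Serial.begin(500000);
  const float capacity_mAh  = 110.0;                          // de-rated CR2032 capacity
  const float sleep_mA      = 0.005;                           // ~5uA continuous (logger + RTC + self-discharge)
  const float bursts_mAhDay = 5.0 * (0.010 / 3600.0) * 96.0;   // 96 readings/day at 5mA for 10ms each
  const float days = capacity_mAh / (sleep_mA * 24.0 + bursts_mAhDay);
  Serial.print(F("Estimated run time (days): "));
  Serial.println(days);          // ~900 days, not far off the two-year figure above before any real-world derating
}
void loop() {}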

The Code: [posted on Github]

The most important difference between a coin cell powered logger and our AA powered units is that the battery has a high probability of being depleted to the point of a BOD restart loop (which causes rapid flashing of the D13 LED). So we use a multi-step serial exchange in Setup() to prevent data already present in the EEprom from being accidentally overwritten.
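The GitHub code implements this as a menu state machine, but the core of the idea is just a timeout-guarded confirmation before anything destructive happens. A minimal sketch of that pattern (the function name, prompt text and timeout are illustrative, not the actual variables in the repository):

// Require an explicit 'start' reply before erasing stored data.  If nothing arrives
// before the timeout (e.g. an unintentional restart with no UART attached) the old
// data is left untouched and the logger simply resumes.
bool waitForStartCommand(unsigned long timeoutMs) {
  Serial.println(F("Type 'start' to ERASE stored data and begin a new run:"));
  unsigned long begun = millis();
  String reply = "";
  while (millis() - begun < timeoutMs) {
    if (Serial.available()) {
      char c = Serial.read();
      if (c == '\n' || c == '\r') break;
      reply += c;
    }
  }
  return (reply == "start");
}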

Addendum 2023-10-25: Code revisions are *currently underway* to support the use of these loggers in enviro-sci course curriculum & some elements of the old logger code may disappear from Github for a while until our students have progressed further through the lab sequence. Those elements will then be restored to the base code on Github. This may cause discrepancies between the text in this post and the options in that code.

In Setup()

A UART connection is required at start-up so those menu-driven responses can occur through the serial monitor in the IDE. These have timeouts to avoid running the CPU during unintentional restarts. The menu sequence can be re-entered at any time simply by closing & re-opening the serial monitor window:

If you see random characters in the download window, you have the baud rate set incorrectly. (We have recently increased this to 500,000 baud in the GitHub code.) Reset the baud rate and the menu should display properly, HOWEVER you then need to close & re-open the window (this restarts the ProMini with the serial window at the correct baud). If you try to Ctrl-A copy out the data while the window still has garbled characters at the top, only those garbled characters at the start will copy out.

The first menu option asks if you want to download the contents of the logger memory to the serial monitor window. This can take 1-2 minutes with large EEproms at 500000 baud, which is the fastest rate an 8MHz ProMini can reliably sustain. Then copy/paste everything from the IDE window into an Excel sheet (Ctrl-A to select, Ctrl-C to copy, Ctrl-V to paste) and, under the Data tab, select Text to Columns to separate the data at the commas. Or you can paste into a text editor and save as a .txt file for import to other programs. While that process is clunky because the IDE's interface has no export function, everyone already has the required cable and data retrieval is driven by the logger itself. (And yes, the exchange could be done with any other serial terminal app.)

After the start menu sequence the first sample time is written to the internal EEprom, so the timestamp for each sensor reading can be reconstructed during data retrieval later by adding offsets to that first reading time. This technique saves a significant amount of our limited EEprom memory, and all it takes is =(Unixtime/86400) + DATE(1970,1,1) to convert those Unix timestamps into human-readable dates in Excel. It is important that you download the old data before changing the sampling interval via the menu options, because that interval (stored in EEprom) is used to reconstruct the timestamps during download. Valid intervals must divide evenly into 60 and be less than 60. Second intervals can be used for rapid testing if you first enter (0) for the minutes during setup.
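Because only the start time and the interval get stored, rebuilding each record's timestamp during download is a single multiply-and-add. A sketch of that step (the variable names here are illustrative; the GitHub code keeps these values in the 328p's internal EEprom):

// Reconstruct the Unix timestamp for record 'recordNumber' of the current run.
uint32_t timestampForRecord(uint32_t firstSampleUnixTime,   // written at the start of the run
                            uint8_t  sampleIntervalMinutes, // also stored in internal EEprom
                            uint16_t recordNumber) {        // 0-based position in the data EEprom
  return firstSampleUnixTime + (uint32_t)recordNumber * sampleIntervalMinutes * 60UL;
}
// In a spreadsheet the same Unix seconds convert with  =(A1/86400)+DATE(1970,1,1)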

No sensor data is lost from the EEprom when you replace a dead coin cell, and you can do the entire data retrieval process on UART power alone with no battery in the logger. But the RTC time should only be reset after installing a new battery, or the time will not be retained. If the time shown in the serial menu after a complete power loss reads 2165/165/165 165:165:85 instead of 2000/01/01, then there's a good chance the RTC's registers have been corrupted & the clock may need to be replaced. I've managed to do this to a few units by accidentally shorting the voltage to zero when the logger was running from a capacitor instead of a battery.

After setting the RTC time, the sampling interval, and other operating parameters, the logger requires the user to enter the 'start' command again. Only when that last 'start' confirmation is received are the EEprom(s) erased, by pre-loading every location with '0' values which also serve as End-Of-File markers during the next download. The red D13 LED then blinks at one-second intervals while the logger waits for the first sampling alarm to align with the current time before beginning the run.

Main LOOP()

To save power, slow functions like digitalWrite() and pinMode() can be replaced with much faster port commands. Careful attention is paid to pin states, peripheral shutdowns (power_all_disable(); saves ~0.3mA) and 15msec sleeps are used throughout for battery recovery. Waking the 328p from powerdown sleep takes 16,000 clock cycles (~2milliseconds @8MHz +60µS if BOD_OFF) and the ProMini draws ~250µA while waiting for the oscillator to stabilize. Care must be taken when using CLKPR to reduce system speed because the startup-time also gets multiplied by the divider.
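As an illustration of the kind of substitutions involved, here is a sketch of the general pattern rather than a paste from the logger code (the wake source is assumed to be an already-configured interrupt such as the RTC alarm on D2):

#include <avr/sleep.h>
#include <avr/power.h>

void napForRecovery() {
  // digitalWrite() burns dozens of cycles; a direct port write takes one:
  PORTB |= _BV(PORTB5);                  // D13 LED on
  PORTB &= ~_BV(PORTB5);                 // D13 LED off

  power_all_disable();                   // gate the clocks to ADC, timers, SPI, TWI & USART

  set_sleep_mode(SLEEP_MODE_PWR_DOWN);   // deepest sleep mode
  sleep_enable();
  sleep_cpu();                           // execution stops here until an enabled interrupt fires
  sleep_disable();                       // ...and resumes here ~16k cycles after the wake event

  power_twi_enable();                    // re-enable only the peripherals needed next
  power_adc_enable();
}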

( Note: For the following images a Current Ranger was used to convert µA to mV during a reading of the RTC’s temperature register at 1MHz. So 1mV on the oscilloscope means 1µA is being drawn from the Cr2032 )

Here CLKPR restores the CPU to 8MHz just before entering powerdown sleep, and then slows the processor to 1MHz after waking. The extra height of that first spike is due to the pullup resistor on SQW. Cutting the trace to that resistor and using an internal pull-up reduces wake current by 750µA.
Here the logger was left at 1MHz when it entered powerdown sleep(s). Waking now takes 16 milliseconds – wasting a significant amount of power through the 4k7 pullup on SQW when the RTC alarm is still asserted at the start of the event.

[ 2023-05-31 UPDATE: I came across several cheap eBay EEproms that would freeze the system when I lowered the system clock to 1MHz to save power during EEprom saves. So I have removed that technique from the code on GITHUB to make that codebase more generic. This problem did not affect the chips I bought from DigiKey]

CR2032 voltage is checked during the EEprom data save because the unloaded voltage of a coin cell barely changes even when the battery is nearly dead, so the reading has to be taken under load. This ensures that the voltage droop is captured, though the exact timing of that minimum varies from one memory chip to the next. Logger shutdown gets triggered when the EEprom write brings the rail voltage below the 2795mv systemShutdownVoltage, which happens as the internal resistance of the coin cell rises at end of life.
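The rail measurement itself relies on the familiar trick of reading the 328p's internal 1.1v bandgap against the rail used as the ADC reference, which is the same '1.1vref trick' mentioned later in this post. A hedged version (the 1125300 constant assumes a nominal 1.1v bandgap, which varies a few percent between chips, so a calibrated value is better for real deployments):

// Rail voltage in millivolts, measured by reading the internal 1.1v bandgap with Vcc
// as the reference.  Assumes the ADC is powered & enabled (power_adc_enable(); ADCSRA |= _BV(ADEN);).
uint16_t readRail_mV() {
  ADMUX = _BV(REFS0) | _BV(MUX3) | _BV(MUX2) | _BV(MUX1);  // AVcc reference, bandgap input channel
  delay(2);                                                 // let the reference settle
  ADCSRA |= _BV(ADSC);                                      // start a conversion
  while (ADCSRA & _BV(ADSC));                               // wait for it to finish
  return (uint16_t)(1125300UL / ADC);                       // 1.1v * 1023 * 1000 / raw reading
}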

Adding Sensors:

An RTC-temperature-only configuration for this logger records the DS3231's 0.25°C-resolution temperature, index-encoded to use only one byte per reading. This allows ~4000 readings to be stored in the 4k EEprom on the RTC module, which works out to a little more than 40 days at a 15 minute sampling interval, but you can set SampleIntervalMinutes to any value that divides evenly into 60.
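One way to do that index-encoding (a sketch of the general idea; the offset and range used in the GitHub code may differ): since the DS3231 reports temperature in 0.25°C steps, multiplying by four and adding a fixed offset maps a ~64°C span onto a single unsigned byte.

// Pack a DS3231 reading (0.25°C steps) into one byte covering -10.0°C to +53.75°C.
// The offset & span are illustrative - choose them to suit your deployment site.
uint8_t encodeRTCtemp(float tempC)   { return (uint8_t)((tempC + 10.0) * 4.0 + 0.5); }
float   decodeRTCtemp(uint8_t code)  { return (code / 4.0) - 10.0; }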

We made extensive use of these RTC temp records in our cave drip loggers at the beginning of the project. The accuracy spec is pretty bad at ±3°C, but they were usually within ±1°C near 25°C. Note that the RTC only updates its temperature output registers once every 64 seconds, making it a fairly slow sensor.

That little AT24c32 doesn’t last very long with sensors that generate 2 or 4 byte integers. The solution is to combine them with larger 32k (AT24c256), or 64k (AT24c512) chips so the sensors arrive with the extra storage space they require. These EEprom modules can usually be found on eBay for ~$1 each and (after you change the bus address & bytesofstorage defs) they work with the same page-write & addressing code as the 4k EEprom.

The headers on this common BMP280 module align with the 32k headers in a ‘back-to-back’ configuration. The tails on the YL-90 breakout extend far enough to connect the two boards. Note this sensor module has no regulator which is better for low power operation.
Pin alignment between the YL-90 and this BH1750 module is slightly more complicated as you must keep the light sensor facing out. BMP280 sensors usually run about two years, but the BME280 variant (includes RH) has a shorter lifespan and should be replaced yearly.
After removing the EEprom's redundant pullups, clip away the plastic spacers around the header pins. Then wiggle the BH1750 over the headers on the 32k module. Solder the points where the pins pass through the 1750 board. I2C pullups on the sensor boards can be left in place.
I2C pin arrangement on the RTC doesn't match the BH1750 module. Make the required cross-overs with a 4-wire F-F Dupont cable (which comes with those red 32k boards), or solder those connections for a more robust joint.

NOTE: Support for both of the sensors shown above is included in the code on Github to serve as examples to guide other I2C sensor additions. Sensors are enabled by uncommenting the relevant defines at the start of the program and the base code also supports the ICU based NTC/LDR combination shown below.

The I2C spec generally expects the pullups to pass about 1mA, which on a 3v system works out to roughly 3300 ohms. This means you can leave the 10k pullups on the sensor boards in place to bring the total pullup (4k7 on the RTC & 50k on the ProMini = ~4k3) closer to that 3.3k ideal. The open-drain implementation of I2C means that capacitance on the bus slows down the rising edges of your clock and data lines, which might require you to run the bus more slowly. The more sensors you add, and the longer the wires, the worse the parasitic capacitance gets. So if you need long I2C wires, drop the total bus pullup to 2k2.

The 662k LDO regulator on most eBay sensors wastes 3-4µA: For long deployments this can be removed & then bridging the in->out pads should bring your sleep back to ~1µA. Technically, the regulator is below spec if your supply falls below ~3.4v

You must use low power sensors with a supply range that spans from 3.6v down to our 2.7v EEprom/BOD cutoff. A good sensor to pair with this logger should sleep around 1µA and take readings below 1mA. Sometimes you can pin-power sensors that lack low-current sleep modes, but if you do, be sure to check for unexpected current leaks in other places such as bus pullup resistors. Also be aware that the I2C bus may be left in an illegal state (the idle condition is with both lines high), requiring a full reset of all sensors after power is restored. Choose libraries which allow non-blocking reads so you can sleep the ProMini while the sensor is gathering data, and check that those libraries do not contain any delay() statements. In that regard my favorite sensor combination for this logger is an NTC thermistor & CdS cell, which adds nothing to the sleep current. We explained how to read resistive sensors with Arduino's Input Capture Unit in some detail back in 2019, so here I will simply show the hardware connections. Add these passives to the Pro Mini BEFORE joining it to the RTC module, taking care not to hold the iron so long that you cook the components:

D6=10kΩ 1% reference resistor , D7=10k NTC, D8=300Ω, D9=LDR (5528). Note that the LDR could be replaced with any other type of resistive sensor. A typical 10kNTC reaches ~65kΩ near -10°C and a 10kLDR usually peaks near 55kΩ at night.
You MUST put the lines you are not reading into input mode to isolate them from the circuit when you read a specific sensor. It’s easy to kill the LDR with too much heat – in that case it becomes infinite resistance.
A 104 ceramic to GND completes the ICU timing circuit. With 0.1uF as the charge reservoir, each resistor reading takes ~1-2msec in sleep mode IDLE. With these sensors I jumper D2->SQW with a longer piece of flexible wire to avoid covering the D13 LED.

Don't expect the NTC you have in your hands to match exactly the values provided by its manufacturer. Fortunately there are several online calculators to choose from when determining your NTC thermistor constants. For calibration data, I use a food dehydrator to heat the loggers to around 45-50°C, then let them cool to room temp. for the midpoint, and then put them in the refrigerator overnight for a cold point at ~5°C. I add this NTC/LDR combination to all loggers (even if they will eventually drive I2C sensors) because a good test-run for newly built units is to read these sensors at an ultra-short five second interval until the EEprom is full. After passing that test you can be sure the core of the logger is OK before adding new sensors.
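Once you have those three calibration points and the constants from one of the online calculators, turning a resistance reading into °C is only a few floating-point lines. A sketch using the Steinhart-Hart form (the A/B/C values below are generic placeholders for a 10k NTC, so substitute the coefficients generated from your own calibration points):

#include <math.h>

// Placeholder Steinhart-Hart coefficients - replace with the values from your 3-point calibration.
const float SH_A = 1.009249522e-03;
const float SH_B = 2.378405444e-04;
const float SH_C = 2.019202697e-07;

float ntcOhmsToCelsius(float rOhms) {
  float lnR  = log(rOhms);                                   // natural log of the measured resistance
  float invT = SH_A + SH_B * lnR + SH_C * lnR * lnR * lnR;   // Steinhart-Hart gives 1/T in Kelvin
  return (1.0 / invT) - 273.15;
}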

Other useful modifications to the basic 2-module logger include:

A 220µF 10-25v Tantalum buffer cap can be added where the rail wires pass through the RTC module. Anything from 200 to 470µF will do the job of reducing older battery droop. After matching polarity, flip it over to bring the SMD solder pads to the top surface for easier soldering. Rail buffering caps assist the coincell, extending runtime by ~30%
The code on GitHub has a #define which enables A0=GND, A1=Green, A2=Blue. Lighting the LED(s) by setting the A1/A2 pins to INPUT_PULLUP keeps the current below 50µA because the internal pullup resistors act as current limiters (see the sketch after this list). The red LED on D13 is left in place to show when you are trapped in a BOD-restart loop. LEDs also make good frequency-specific light sensors.
With a few strategic bends, single I2C sensors can be soldered directly to the pins.
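A sketch of that pullup-limited LED trick, using the A0/A1/A2 pin arrangement described in the list above (the function name and timing are illustrative):

// Blip the green channel through the 328p's internal pullup (~35k), so the LED can
// never draw more than a few tens of microamps from the coin cell.
void greenBlip(uint16_t ms) {
  pinMode(A0, OUTPUT);  digitalWrite(A0, LOW);  // common cathode held at GND
  pinMode(A1, INPUT_PULLUP);                    // green anode fed through the internal pullup
  delay(ms);                                    // dim, but visible in the dark (real code sleeps instead)
  pinMode(A1, INPUT);                           // release the pullup
  pinMode(A0, INPUT);
}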

EEprom & sensor additions push measured sleep currents to 2µA (so ~6µA actual with the RTC's 3.0µA), but that still gives a >1 year estimate on 110mAh. With all due respect to Ganssle et al, the debate about whether buffering caps should be used to extend operating time is something of a McGuffin, because leakage currents are less important when you only have enough storage space for one year of operation. Even a whopper 6.3v 1000µF tantalum only increases sleep current by about 1µA. That's 1µA*24h*365days, or about 8.76 mAh/year, in trade for keeping the system above our 2.7v cutoff with barely a ripple. That means we don't need to lower the BOD with fuses & custom bootloaders that break code portability. Pushing the limits of fuse optimization can get a little flaky on these cheap boards, so it's good to have those 'Get out of jail free' defaults available at reboot. When you only service your loggers once a year, any tweaks that require you to remember 'special procedures' in the field are things you'll probably regret. (And many of those cheap EEproms on eBay also have a 2.7v lower limit.) Using the 328p internal oscillator to save power is also a non-starter because its 10% error borks your UART to the point you can't upload code.

With a practiced hand you can do a memory expansion right on the RTC module without changing the sleep current: Here I’ve replaced the default 4k AT24c32 with a 64k AT24c512. 64k is the sweet spot for single sensors generating a 2-byte integer value as you can store ~340 days of data with a 15minute interval. The RTC modules default configuration pulls address pins high (=0x57) with a 4k7 resistor block, while the red YL-90 modules pull the address pins low (=0x50). So you retain the option of adding another eeprom on the I2C header pins after this upgrade. It is also possible to solder new chips onto those little red EEprom breakouts. Note: 128k AT24c1024 chips are slightly larger than the 32&64k so you have to bend the legs straight down which makes that soldering tricky. So I usually find it easier to just ‘stack’ two smaller 64k chips.
Here's an example of stacking the EEproms. The RTC module pulls all address pins high (setting the lower default 4k EEprom in this picture to 0x57), but if you leave any address pins on the 64k chip 'unconnected' they get internally pulled to ground (setting the bus address to 0x51 for the upper chip in this picture). AT24C512's cost about 50¢ on eBay. The Write Protect pin can also be left unconnected. Each chip you add increases overall sleep current by ~1µA.

Then use both eeproms by changing defines at the start of the code:
#define opdEEpromI2Caddr 0x57
#define opdEEbytesOfStorage 4096
#define sensorEEpromI2Caddr 0x51
#define sensorEEbytesOfStorage 65536

Leakage scales linearly with capacitance, so use the Falstad simulator to see what size you actually need. Capacitors rated 10x higher than the applied voltage reduce leakage current by a factor of 50, so your rail buffering caps should be rated between 10 and 30v if you can find them. While they are a bit bulky, electrolytics also work fine. The 220µF/25v 227E caps I tested only add ~5nA to the logger's sleep current, and these can be obtained on eBay for <50¢ each. High voltage ratings get you closer to the low leakage values you'd see with polypropylene, polystyrene or Teflon film, and move you farther away from de-rating problems.

Note that as the buffering cap gets larger, you need to add more 'recovery time' before the rail voltage is restored after each load. A large rail capacitor also protects the unit from impacts which might briefly disconnect the spring contact under the coin cell. This is such a common problem in our other loggers that we use a drop of hot glue to lock the RTC coin-cell in place before deployment.

Discussion:

CLKPR brings the ProMini down to 1MHz and a current of ~1.3 mA; however, the energy cost per logging event actually increases when the system clock gets divided. But with our slim operating margin, the growing internal resistance of the coin cell means we have to stay above 2.775v even if it means using less efficient code. Running from the internal oscillator might help, but is avoided because our ICU timing method needs the thermal stability of an external oscillator, and the internal oscillator makes UART comms flaky. FRAM has much lower saving currents than standard EEproms, but those expensive chips sleep around 30µA, so they aren't a viable option for low-power systems (…unless you pin-power them so you can cut their power during sleep).

In the next three images, a Current Ranger converts every 1µA drawn by the logger to 1mV for display on the scope. The last two spikes are transfers of buffer-array data into the 4K EEprom on the RTC module while the CPU takes ADC readings of the rail voltage. Note that our code staggers EEprom save events so they don’t occur in the same pass like this, but I forced them together for this testing to illustrate the effect of repeated pulse-loads:

A triple event with a temperature sensor reading followed by the transfer of two array buffers to EEprom. Battery current with no rail buffering cap. [Vertical scale: 500µA /division, Horizontal: 25ms/div]
Here a 220µF tantalum capacitor was used to reduce the peak battery currents from 2.5mA to 1.5mA for that same event.
Here a 1000µF tantalum [108J] capacitor reduces the peak battery current to 1mA. The 30msec sleep recovery times used here are not quite long enough for the larger capacitor.
Voltage across a coin cell that’s been running for two months with NO buffering capacitor. The trace shows the 2.5mA loads causing a 60mv drop; implying the cell has ~24 ohms internal resistance. [Vertical Scale: 20mv/div, Horizontal: 25ms/div]

The basic RTC-only sensor configuration reached a very brief battery current peak of ~2.7mA with no buffering cap, 1.5mA with 220µF and less than 1mA with 1000µF. The amount of voltage drop these currents create depends on the coin cell's internal resistance, but a typical unbuffered unit usually sees 15-30mV drops when the battery is new, and this grows to ~200mV on old coin cells pulled from loggers that have been in service since 2018. The actual drop also depends on time, with subsequent current spikes having more effect than the first as the internal reserve gets depleted. The following images display the voltage droop on a very old coin cell pulled from a logger that's been in service since 2016 (@3µA average RTC backup):

This very old coin cell experiences a larger 250mv droop with no capacitor buffer. Note how the initial short spike at wakeup does not last long enough to cause the expected drop. [Vertical: 50mv/div, Horizontal: 25ms/div]
Adding a 220µF/25v tantalum capacitor cuts that in half but triples the recovery time. CR2032‘s usually plateau at 3.0v for most of their operating life, so the drop starts from there.
[Vertical: 50mv/div, Horizontal: now 50ms/div]
A 1000µF/6.3v tantalum added to that same machine limits droop to only 60mv. Recharging the capacitor after the save now approaches 200 milliseconds. [Vertical : 50mv/div, Horizontal: 50ms/div]

After many tests like those above, our optimal solution is to run the processor at 8MHz most of the time while breaking up the execution time with multiple 15 millisecond POWER_DOWN sleeps, before the CR2032 voltage has time to fall very far. (This is especially necessary if you start doing a lot of long-integer calculations.) This has the added benefit that successive sensor readings start from similar initial voltages. The processor is brought down to 1MHz only during the EEprom save event, where the block cannot be divided (and that only happens when the data buffering arrays are full).
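For reference, the prescaler swap around that un-splittable block is just a pair of calls; clock_prescale_set() from avr/power.h performs the timed CLKPR write sequence for you, and remember that delay(), millis() and the UART baud all scale with the divider. A sketch of the pattern, not a paste from the repository:

#include <avr/power.h>

void saveBuffersAtReducedClock() {
  clock_prescale_set(clock_div_8);   // 8 MHz crystal / 8 = 1 MHz core clock
  // ... buffered EEprom page-writes & the under-load battery reading go here ...
  clock_prescale_set(clock_div_1);   // restore 8 MHz before the next power-down sleep
}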

Current drawn in short bursts of 8MHz operation during sensor readings. The final EEprom save peaks at ~2.75mA draw with CLKPR 1MHz CPU & Sleep_ADC readings.
[CH2: H.scale: 25msec/div, V.scale 500µA/div via Current Ranger]
Voltage droop on that same ‘old’ CR2032 used above reached a maximum of 175mv with NO buffering capacitor across the rail. This battery has about 64 ohms of internal resistance.
[CH2: V.scale 25mv/div, H.scale 25ms]
Adding a 220µF tantalum capacitor to the rail holds that old battery to only 50mv droop. The 25v tantalum cap adds only 0.1µA leakage to the overall sleep current.
[CH2: V.scale 25mv/div, H.scale 25ms]
This 'solder-free' AT24C256 DIP-8 carrier module is bulky compared to the red YL-90 boards, but it lets you easily upgrade to a 64k AT24C512 & configure multiple I2C addresses. Here I've removed the redundant power LED & pullup resistors.

Even with fierce memory limitations we only use the 328's internal 1k EEprom for a couple of index variables that get written while still tethered to the UART for power. EEPROM.put blocks the CPU for 3.3msec per byte from the second byte onward, and writing the internal EEprom adds an additional 8mA to the ProMini's normal 5mA operating current, which exceeds the recommended pulse current for a garden-variety CR2032. And multi-byte page writes aren't possible, so data saved into the 328p costs far more power than storage in an external EEprom. However it is worth noting that reading from the internal EEprom takes only four clock cycles with no current penalty, while PROGMEM takes three and RAM takes two. So it doesn't really matter to your runtime power budget where you put constants or even large lookup tables.

Most DIP8 EEproms are pin compatible with that carrier. 128k EEproms are usually divided into 64k blocks with sequential I2C addresses so the location variables don’t exceed uint16_t max of 65535. Heliosoph posted a way to combine multiple 64k EEproms into a single linear address range but with ‘combination’ sensors like the BME280 sometimes it’s easier to just send each sensor’s output to a different bank using the two bus addresses. Our code demonstrates how to do this with the OPD & sensor arrays.

A simple optimization we haven’t done with the code posted on GitHub is to increase the I2C buffer. All AT-series EEproms are capable of 32-byte page-writes but the default wire library limits you to only 30 bytes per exchange because you lose two for the register location. So we used 16-byte buffer arrays in the starter code but you could increase those array/transfers to 32 bytes by increasing the wire library buffers to 34 bytes:

In wire.h (@ \Arduino\hardware\arduino\avr\libraries\Wire\src)
#define BUFFER_LENGTH 34
AND in twi.h (@ \Arduino\hardware\arduino\avr\libraries\Wire\src\utility)
#define TWI_BUFFER_LENGTH 34

With larger EEproms you could raise those buffers to 66 bytes for 64 data-byte transfers. That buffer gets replicated in five places so the wire library would then require an extra 138 bytes of ram over the 32-byte default. 128k EEproms often refresh entire 128-byte blocks no matter how many bytes are sent, so increasing the buffer reduces wear considerably for those chips, while 64k & below may perform partial page-writes more gracefully. It’s also worth mentioning that there are alternate I2C libraries out there (like the one from DSS) that don’t suffer from the default library’s limitations.
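With the buffers raised to 34 bytes, a full 32-byte page write through the stock Wire library looks something like this (0x57 is the RTC module's default EEprom address used elsewhere in this post, and memAddr must sit on a page boundary; ACK-polling instead of the fixed 6ms wait would be slightly more efficient):

#include <Wire.h>

// Write one full 32-byte page to an AT24-series EEprom.  Requires BUFFER_LENGTH and
// TWI_BUFFER_LENGTH raised to at least 34 bytes, as described above.
void eepromPageWrite(uint8_t i2cAddr, uint16_t memAddr, const uint8_t *page) {
  Wire.beginTransmission(i2cAddr);
  Wire.write((uint8_t)(memAddr >> 8));     // two register-address bytes...
  Wire.write((uint8_t)(memAddr & 0xFF));
  Wire.write(page, 32);                    // ...then the whole page in one exchange
  Wire.endTransmission();
  delay(6);                                // typical write-cycle time; poll for an ACK to be safer
}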

An average sleep current of ~5µA*86400 sec/d burns ~432 milliAmpseconds/day. With a page-write that draws 4mA*6msec, the usual 12x buffer-array transfers of data to EEprom per day will consume about 288mAs. Cutting that in half by doubling the size of the array is going to save you ~144mAs per day, so it will take four days to save enough power for one more day of operation. That return is better with older 10 millisecond/write EEproms, and reducing the number of pulse-load events may extend battery life in ways other than the power being saved. I always do the 34-byte I2C buffer & 32-byte data array increases for long deployment loggers because the cheap EEproms on eBay are always older revs, so they take twice as long for both read & write operations compared to newer AT series chips from DigiKey.

No matter what optimizations you make, battery life in the real world can also be shortened by temperature cycling, corrosion from moisture ingress, being chewed on by an angry dog, etc. And you still want the occasional high drain event to knock the passivation layer off the battery.

Here wires extend connections for the thermistor & LED to locations on the surface of the housing. Alternate power is brought in from a small solar panel – but I will post more on that little experiment later 🙂

An important topic for a later post is data compression. Squashing low-rez sensor readings into only one byte (like we do with the RTC temperature & battery voltage) is easy, especially if you can subtract a fixed offset from the data first. But doing that trick with high-resolution thermistor or lux readings is more of a challenge. Do you use 'Frame of Reference' deltas, or XOR'd mini-floats? We can't afford much of an energy tradeoff for heavy calculations on our little 328p, so I'm still looking for an elegant solution.
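For what it's worth, the 'Frame of Reference' approach boils down to storing one full-resolution anchor reading per block and then only small offsets from it. A deliberately simplified sketch (it ignores the overflow/escape handling a real implementation needs when a step exceeds ±127 counts):

// Frame-of-Reference sketch: a 2-byte anchor followed by 15 one-byte signed offsets.
uint8_t frameBuffer[17];
int16_t anchor;

void startFrame(int16_t firstReading) {
  anchor = firstReading;
  frameBuffer[0] = firstReading >> 8;            // full-resolution anchor, big-endian
  frameBuffer[1] = firstReading & 0xFF;
}

void addReading(uint8_t slot, int16_t reading) {        // slot = 1..15
  frameBuffer[1 + slot] = (int8_t)(reading - anchor);   // store only the offset from the anchor
}

int16_t readBack(uint8_t slot) {                        // slot = 0 returns the anchor itself
  int16_t base = (int16_t)(((uint16_t)frameBuffer[0] << 8) | frameBuffer[1]);
  return (slot == 0) ? base : base + (int8_t)frameBuffer[1 + slot];
}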

Hopefully this new member of the Cave Pearl family goes some way toward answering people asking why we haven’t moved to a custom PCB: Using off-the-shelf parts is critical to helping other instructors base their courses on our work, and when you can build a logger in about 15 minutes, from the cheapest parts on eBay – that still runs for a year on the coin cell… why bother? We do water sampling dives all the time with those 50mL centrifuge tubes and I’ve yet to see the Nunc’s from Thermo leak at depths far beyond IP68. And again, you are only talking about $1 each for those tubes.

We’ve also been having fun embedding these ‘ProMini-llennium Falcons’ into rain gauges and other equipment that predates the digital era. There’s a ton of old field kit like that collecting dust in the corner these days that’s still functional, but lacks any logging capability.

Addendum

30ml self-standing Caplugs from Evergreen Labware are another good housing option because they have a brace in the cap that just fits four 22gauge silicone jacket wires. Holes drilled through the lower stand enable zip-ties to secure the logger. The outer groove in the lid provides more surface area for JB-weld epoxy, giving you an inexpensive way to encapsulate external sensors. 1oz / 25ml is enough to cover about five of these sensor caps. Then clear JB weld can be used as a top-coat to protect optical sensors.

Drill the central channel to pass the I2C wires through the cap. Roughen the upper surfaces with sandpaper to give it some tooth for the epoxy.
Conformal coat the board before the epoxy. Work the epoxy over the sensor board carefully with a toothpick and wipe away the excess with a cotton swab.

If you are deploying in an area exposed to direct sun, you can prevent the logger from getting too hot by adding a layer of PTFE thread tape around the tube:

PTFE tape keeps the logger cool and also makes an excellent light diffuser when reading the LDR.
Heat shrink holds tape in place

Adding a small amount of silicone grease to the rim of the tube before closing improves the seal with the lid considerably. We’ve done pressure tests to 45psi so these tubes can be deployed to at least 20m depth. Avoid old stock as the caps get brittle & crack long before the clear tubes age. Use small 0.5-1 gram desiccant packs with this housing.

Addendum: 2022-07-01

Since we covered adding sensors, here's a couple of burn-down curves for the two configurations described above. Both were saving 4 bytes of data per record every 30 minutes, giving a runtime storage capacity of about 150 days. Battery voltage was logged each time the 16-byte buffer-arrays were written to EEprom. Both loggers have a measured sleep current of ~1.5µA and they will be downloaded periodically. Although the curve spikes up after each download, each record below is a continuous run on the same coin cell battery:

Cr2032 voltage after 11 months @30min sampling interval: BMP280 sensor reading Temp. & Pr. stored in 32k eeprom with NO 220µF rail buffering capacitor. This test run is complete. At oversampling x16 the BMP uses considerably more power than the BH1750.
Coin cell after more than 12 months @30min sampling interval: BH1750 sensor & 32k ‘red board’ EEprom (Sony brand battery: again, with no rail buffer cap). Both of these records show only the lowest battery reading in a given day. This logger is still operating…

I’m running these tests without help from a rail buffering cap, to see the ‘worst case’ lifespan. A pulse loaded Cr2032 has an internal resistance of ~20Ω for about 100 mAh of its operational life, so our 5mA eeprom writing event should only drop the rail 100mv with no rail buffer cap. But once the cell IR approaches 40Ω we will see drops reaching 200mv for those events. The CR2032’s shown above have plateaued near their nominal 3.0v, so we should see the rail droop to ~2800mv when the batteries age. Again, with the 220 µF rail capacitor those drops are reduced to less than 1/2 that size and with 1000µF they are virtually eliminated.

Note that the download process briefly restores the voltage because the 3.3v UART adapter drives a small reverse current through the cell. I think this removes some of the passivation layer, but the effect is short lived. I have reloaded these two loggers with a new code build that tracks both high (immediately after wake) & low (during EEwrite) battery levels to see if the delta in the logs matches the 50mv drops I usually see with a scope.

According to Maxell's 1Meg-ohm (3.3µA continuous) discharge test, coin cells should stay at their 3v plateau until they deliver about 140mAh [~500,000 mAs], so buffering caps aren't really needed until batteries pass that point. In testing, 200µF rail caps extended runtime by about 35%. Of course, if you reach a year without the rail buffer, then you've probably filled the EEprom. So that capacitor may only be necessary with high-drain sensors, or in low temperature deployments where the battery will struggle with larger IR drops. According to Nordic Semi: "A short pulse of peak current, say 7mA (typical of a Bluetooth Low Energy radio) for 2 milliseconds followed by an idle period of 25ms is well within the limit of a CR2032 battery to get the best possible use of its capacity." Our EEprom save events are typically around 6-8 mA for 5ms, which causes <50mv drop with a 200µF cap. Even with very old batteries the typical EEsave event doesn't usually drop the rail more than 150mv; however the recovery time grows from less than 25msec to more than 150msec, so logging events look more like 'blocks' on the oscilloscope trace rather than a series of short spikes.

And here we compare our typical logging events to the current draw during the RTC’s internal temperature conversion (with a 220µF/25v cap buffering the rail). On all three the horizontal division is 50 milliseconds, and vertical is 200µA via translation with a current ranger:

Typical sampling event peaks at 450µA with a 220µF rail buffer cap. The logger sleeps for 15msec battery recovery after every sensor reading or I2C exchange.
Every 64 seconds a DS3231 temperature conversion draws about 250-300µA for 170ms. There is no way to influence the timing of the RTC conversions.
Occasionally the RTC temp conversion happens in the middle of a logging event adding to the peak current.

The datasheet spec for the DS3231 temp conversion is 125-200ms at up to 600µA, but the units I tested drew half that at 3.3v. The rail cap can’t protect the coin cell from these long duration events so RTC temp conversions overlapping the EEprom save will likely be the trigger for most low voltage shutdowns. The best we can do to avoid these collisions is to check the DS3231 Status Register (0Fh) BSY bit2 and delay the save till the register clears. But even with that check, sooner or later, a temp conversion will start in the middle of an EEprom save.
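Checking that bit is only a couple of Wire transactions. A hedged sketch (0x68 is the DS3231's fixed bus address; the GitHub code wraps this in its own register-read helper):

#include <Wire.h>

// True while the DS3231 is mid temperature-conversion: BSY is bit 2 of status register 0x0F.
bool rtcIsBusy() {
  Wire.beginTransmission(0x68);
  Wire.write(0x0F);                 // point at the status register
  Wire.endTransmission();
  Wire.requestFrom(0x68, 1);
  return Wire.read() & 0x04;
}

// before starting the EEprom save:   while (rtcIsBusy()) { /* short power-down nap */ }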

Another thing to watch out for is that with sleep currents in the 1-2µA range, it takes about a minute to run down the little 4.7µF cap on the ProMini board, and if you have a 220µF buffering the rail the logger can sleep for more than 10 minutes with no battery. So if you are trying to reset the RTC, you may need to briefly short Vcc to GND (at the UART headers) after removing the coin cell. Note that on several of the RTC modules the alarms continue to be asserted even after you disable them in the control register, and this draws 600-700µA continuously through the pullup on the module. The only way to be absolutely sure the RTC alarm(s) will not fire after a shut-down is to turn off the RTC's main oscillator. It's also worth noting that the datasheet says: "If low power consumption during reset is important, it is recommended to use an external pull-up or pull-down." If your code hangs, the processor will draw 5mA continuously until the battery drains and the logger goes into a BOD restart loop with the D13 red LED flashing quickly. The logger will stay in that BOD loop for 4-12 hours until the battery falls below 2.7v without recovering. This has happened many times in development with no damage to the logger or to any data in the EEprom.

In all cases, your first suspect when you see weird behavior out of the logger is that the coin cell needs to be replaced. It's worth noting that name-brand CR2032s (Panasonic, MuRata, Duracell, Energizer, etc.) can last significantly longer than no-name 'bulk' coin cells from eBay/Amazon. They also plateau at 3.05v, while cheaper cells tend to level out at 2.95v. Most of the units I've tested trigger their BOD just below 2.775 volts, and 10 to 20 millivolts before the BOD triggers, the internal voltage ref goes a bit squirrely, reporting higher voltages than actual if you are using the 1.1vref trick to read the rail. The spring contact in the RTC module can be pretty bad, and that can give you random quits from large voltage drops, so I usually slide a piece of heat-shrink behind it to strengthen contact with the flat surface of the coin cell. Normal operation will see 40-50mv drops during EEprom saves of up to 3msec with 200µF rail buffers (with a 1000µF rail cap those events are under 20mv). If those events look unusually large, or recovery starts stretching to hundreds of milliseconds on the scope, you probably have a bad battery contact. In all cases, a long duration load can deplete the rail buffering cap: a 200µF reaches the same v-drop as a 'naked' battery after ~3msec, and a 1000µF after ~10msec.

Addendum: 2023-04-23

We finally released the full build tutorial on YouTube – including how to upgrade the default 4k EEprom with two stacked 64k chips:

…and for those who already have soldering skills, we posted a RAPID 4 Minute review at 8x playback

Make at least two machines at a time. I usually build in batches of six, and of those, one usually ends up with either a faulty RTC module or a ProMini with one of those fake 328p chips that won't sleep below 150µA. Test each ProMini with 'Blink' before assembly because you occasionally get one that shipped without a bootloader. Having more than one logger makes it easy to identify when you've got a hardware problem rather than an error in your code. Even then, no unit is worth more than an hour of troubleshooting when you can build another one from scratch in about 30 minutes – your time is worth far more than these components. That said, taking time to clean all the parts before & after assembly is always worth it, because with sleep currents below 5µA any leakage paths between PCB traces from flux residue, fingerprint smudges, etc. become important.

Also note: 99¢ eBay sensor modules are cheap for a reason, and it's not unusual for us to see 25% of them rejected for strange behavior or infant mortality during week-long burn-in tests. Relative accuracy spec for the BMP280 is supposed to be ±0.12 millibar, but when I run a batch of them side-by-side I usually see ±4 millibar between the records. So huddle-test each batch to normalize them, and be sure to look at each graph individually so you don't include any bad data in your normalization, which could throw off ALL of the corrections. Cheap BME280s sometimes refuse to operate with their individual RH/T/Pr sensors set at different oversampling levels, and at the highest x16 setting that sensor may use more power than your budget can sustain over a long deployment. Another thing to be aware of is that real-world installation means exposure to condensing conditions. For sensors with a metal cover (like the BMP280), internal condensation will happen at the dew point – often killing the sensor.

This $10 Si7051 temp sensor module has ±0.1°C accuracy and sleeps in the nano-amp range. You are more likely to find sensor modules with low power requirements on Tindie than you are on eBay/Amazon. Be careful about boards with regulators, as their quiescent draw can be much larger than the sensor's sleep current.

And all the other quid-pro-quos about dodgy eBay vendors still apply: Split your part orders over multiple suppliers with different quantities, ordered on different days, so you can isolate the source of a bad shipment. Don’t be surprised if that batch of boards somehow turns into a random delivery of baby shoes to your doorstep. Amazon is often cheaper than eBay and AliExpress is 1/4 the price of both.

Addendum: 2023-06-05

16x accelerated tests averaged about 1250 hours run time.

Ran a series of CR2032 battery tests with these little loggers and was pleasantly surprised to find that even with the default BOD limiting us to the upper plateau of those lithium cells, we can still expect about two years of run time from most name-brand batteries (with a 200µF rail cap). And with a series of different resistors on the digital pins, this logger might be the cheapest way to simulate complex duty cycles for other devices. Also keep in mind that all the units in the battery test had BODs below 2.8v – about 1 in 50 of these ProMinis will have a high BOD at the maximum 2.9v value in the datasheet. It's worth doing a quick burn test with the Hdao cells to spot these high-cutoff units and exclude them from deployment.

Addendum: 2023-10-25

Macintosh users have been running into a very specific problem with this logger: their USB-C to USB-A adapter cables are smart devices with chips inside that will auto shut-down if you unplug them from the computer while they are connected to a battery powered logger. The VCC & GND header pins on the logger feed enough power/voltage back through the wires to make the chip in the dongle go into some kind of error state – after which it does not re-establish connection to the Mac properly until it is completely de-powered. So you must unplug your loggers at the UART-module-to-logger connection and NOT by simply pulling the whole string of still-attached devices out of the USB-C port.

Addendum: 2023-12-01

Released the classroom version of this 2-module logger, with substantial code improvements to make it easier to add new sensors. This new build has two breadboards supported on 3D printed rails so that sensor connections can easily be changed from one lab activity to the next. The default code reads temperature via the RTC and an NTC thermistor, light via an LDR and the BH1750, and pressure via a BMP280.

Addendum: 2024-04-20

We have a separate post describing how to calibrate the NTC thermistors on these two module loggers using a DIY warm water bath made from insulated lunch boxes:

References:

Heliosoph: Arduino powered by a capacitor
Nick Gammon: Power Saving techniques for microprocessors
Jack Ganssle: Hardware & Firmware Issues Using Ultra-Low Power MCUs
Using a $1 DS3231 Real-time Clock Module with Arduino
Waterproofing your Electronics Project
An Arduino-Based Platform for Monitoring Harsh Environments
Oregon Embedded Battery Life Calculator
WormFood’s AVR Baud Rate Calculator
ATmega328P Datasheet

Timing an LED light-sensor with Pin Change Interrupts

Individual sub-channels in an RGB LED are off-center, and the chemistries have different overall sensitivities, so you see substantial offsets between colors on spatial distribution charts. Image from: Detail of a RGB LED 2.jpg by Viferico

We’ve been using a reverse-bias discharge technique to turn the indicator LEDs on our loggers into light (& temperature) sensors for several years. The starter code on GitHub demonstrates the basic method, but as the efficiency of garden-variety RGBs continues to improve, I’ve noticed that the new ‘super-brights’ also seem to photo-discharge more rapidly than older LEDs. Sometimes the red channel discharges so quickly that we hit the limit of that simple loop-counting method with our 8MHz processors.
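For context, the loop-counting approach looks roughly like this (a minimal sketch only, using the same pin assignments as the Timer1 code later in this post; the full version with channel pre-discharge and sleep handling is in the GitHub starter code, and the 60000-count ceiling is arbitrary):

#define RED_PIN   4              // LED anode for the red channel
#define LED_GROUND_PIN   5       // common cathode

uint16_t readRedByLoopCount() {
  pinMode(RED_PIN, OUTPUT);  digitalWrite(RED_PIN, LOW);   // reverse-bias the red channel:
  pinMode(LED_GROUND_PIN, INPUT_PULLUP);                   // anode low, cathode pulled high
  delayMicroseconds(24);                                   // charge the junction capacitance
  digitalWrite(LED_GROUND_PIN, LOW);                       // release the pullup so photocurrent drains that charge
  uint16_t loopCount = 0;
  while (digitalRead(LED_GROUND_PIN) && (loopCount < 60000U)) {
    loopCount++;                                           // brighter light -> faster discharge -> fewer counts
  }
  return loopCount;                                        // saturates at 60000 in very low light
}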

Normally when I want more precise timing, I use the Input Capture Unit (ICU), which takes a snapshot of the Timer1 register the moment an event occurs. This is now our preferred way to read thermistors, but that means on most of our deployed loggers the ICU on D8 is already spoken for. And a multi-color LED offers interesting ratio-metric possibilities if you measure each channel separately. That prompted me to look into PIN CHANGE interrupts, and I’m happy to report that, with a few tweaks to suspend other interrupt sources, Pin Change & Timer1 can approach the limits of your system clock. So results are on par with the ICU, but Pin Change extends that ability to every I/O line. With slow sensors, where the counts are high, I usually put the system into SLEEP_MODE_IDLE to save battery power while counting, but that adds another 6-8 clock cycles of jitter. Deeper sleep modes like power_down are not usable here because the ~16,000 clock cycles the processor waits for oscillator stabilization after a deep sleep make accurate timing impossible.

Fig 5. Light cones emitted by clear and diffuse LED lenses, from the Olympus document Introduction to Light Emitting Diodes. There is another good LED primer from Zeiss. For more: this paper does a deep dive into LED radiation patterns.

If you are new to interrupts then Nick Gammon’s interrupt page is definitely the place to start (seriously, read that first, then come back & continue here…). The thing that makes working with interrupts complicated is that microcontrollers are cobbled together from pre-existing chips, and then wires are routed inside the package to connect the various ‘functional parts’ to each other and to leads outside the black epoxy brick. Each ‘internal peripheral’ uses a memory register to control whether it is connected (1) or not (0), and several sub-systems are usually connected to the same physical wires. Each of those ‘control bits’ has a name which is completely unrelated to the pin labels you see on the Arduino. So you end up with a confusing situation where a given I/O line is referenced with ‘named bits’ in the GPIO register, other ‘named bits’ in the interrupt peripheral register, yet more ‘named bits’ in the ADC register, and so on. Pin maps try to make it clear what’s connected where, but even with those in hand it always takes a couple of hours of noodling to get the details right. I’m not going to delve into that or this post would scroll on forever, but there are good references out there to Google.

Fast Reading of LED light sensors:

#include <avr/power.h>          // power_all_disable() / power_xxx_enable()
#include <avr/sleep.h>          // set_sleep_mode() / sleep_enable() / sleep_cpu()
#include <util/delay.h>         // _delay_us()
#define   RED_PIN   4             // my typical indicator LED connections
#define   GREEN_PIN   6
#define   BLUE_PIN   7
#define   LED_GROUND_PIN   5     //common cathode on D5
volatile unsigned long timer1overflowCount;


//  Reading the red channel as a stand-alone function:

uint32_t readRedPinDischarge_Timer1() {   

// discharge ALL channels by lighting them briefly before the reading
digitalWrite(LED_GROUND_PIN,LOW);  pinMode(LED_GROUND_PIN,OUTPUT);
pinMode(BLUE_PIN,INPUT_PULLUP);   pinMode(GREEN_PIN,INPUT_PULLUP);
pinMode(RED_PIN,INPUT_PULLUP);

    //execution time here also serves as the LED discharge time
    byte gndPin = (1 << LED_GROUND_PIN); 
    byte keep_ADCSRA=ADCSRA; ADCSRA=0;   byte keep_SPCR=SPCR;
    power_all_disable();     // stops ALL timers & peripherals to save power and reduce spurious interrupts
    power_timer1_enable();   // but Timer1 must be powered back up because we use it for the count below
    bitSet(ACSR,ACD);        // disables the analog comparator

digitalWrite(BLUE_PIN, LOW);digitalWrite(GREEN_PIN, LOW);
digitalWrite(RED_PIN, LOW);   //end of the LED discharge stage

//reverse polarity to charge the red channel's internal capacitance:
pinMode(RED_PIN, OUTPUT); pinMode(LED_GROUND_PIN, INPUT_PULLUP);
_delay_us(24);  //alternative to delayMicroseconds() that does not need timer0

noInterrupts();
// enable pin change interrupts on the D5 ground line
bitSet(PCMSK2,PCINT21); // set Pin Change Mask Register to respond only to D5
bitSet(PCIFR,PCIF2);  // clears any outstanding Pin Change interrupts (from PortD)
bitSet(PCICR,PCIE2); // enable PinChange interrupts for portD ( D0 to D7 )

set_sleep_mode (SLEEP_MODE_IDLE);    // this mode leaves Timer1 running
sleep_enable();                      // must be called before sleep_cpu() will actually sleep
timer1overflowCount = 0;                          // zero our T1 overflow counter

// reset & start timer1
TCCR1A = 0;    // Compare mode bits & wave generation bits set to zero (default)
TCCR1B = 0;    // Stop timer1 by setting Clock input Select bits to zero (default)
TCNT1 = 0;      // reset the Timer1 ‘count register’ to zero
bitSet(TIMSK1,TOIE1);   // enable Timer1 overflow Interrupt so we can count them
bitSet(TCCR1B,CS10);    // starts timer1 with no prescaling, counting at the 8MHz system clock (3v ProMini)
interrupts();

PIND = gndPin;    // toggling the port bit is a faster equivalent of digitalWrite(LED_GROUND_PIN,LOW) - the photo-discharge starts here

do{ 
sleep_cpu(); 
     }while ( PIND & gndPin );      //evaluates true as long as gndPin is HIGH

TCCR1B = 0;                           // STOPs timer1 (this is redundant – but just making sure)
bitClear(TIMSK1,TOIE1);      // T1 Overflow Interrupt also disabled
sleep_disable();

bitClear (PCICR, PCIE2);          // now disable the pin change interrupts for PortD (D0 to D7)
bitClear (PCMSK2,PCINT21); // reset the PC Mask Register so we no longer listen to D5
bitSet (PCIFR, PCIF2);              // clear any outstanding pin change interrupt flags

power_timer0_enable();        // re-enable the peripherals
power_twi_enable();
power_spi_enable();    SPCR=keep_SPCR;
power_adc_enable();   ADCSRA = keep_ADCSRA;
power_usart0_enable();

pinMode(RED_PIN,INPUT);
pinMode(LED_GROUND_PIN,OUTPUT);  // normal ‘ground’ pin function for indicator LED
return ((timer1overflowCount << 16) + TCNT1);
              //returning this as uint32_t, so max allowed is 4,294,967,295
}


// and the required ISR’s
ISR (TIMER1_OVF_vect)  {
timer1overflowCount++;
      if(timer1overflowCount>10000){         // this low light limiter must be <65534
         DDRD |= (_BV(LED_GROUND_PIN));    // sets our gnd/D5 pin to output (is already LOW)
                                                               // Bringing D5 low breaks out of the main do-while loop 
         TCCR1B = 0;  // STOPs timer1 //CS12-CS11-CS10 = 0-0-0 = clock source is removed
      }
}

ISR (PCINT2_vect)  {                                   // pin change interrupt vector (for D0 to D7)
    TCCR1B = 0;                                             // STOPs timer1
    DDRD |= (_BV(LED_GROUND_PIN));    // forces GND pin low to break out of the sleep loop
}


Key details: 

A 1k resistor was present on the LED’s common GND line for all these tests, but the limit resistor has no effect on the photo discharge time.

The code above tweaks our standard discharge method (on GitHub) with port commands & PIND toggles where things need to happen as fast as possible, but it also uses slower digitalWrite/pinMode commands in places where you want to spend more time (in the pre-read channel-discharge steps). The power register lowers current draw during SLEEP_MODE_IDLE, and power_all_disable() also shuts down Timer0, so those pesky 1-msec overflows don’t disturb the count. Waking from SLEEP_MODE_IDLE adds a constant offset of about 8 clock cycles, but it reduces the jitter you’d normally see with the CPU running. One or two clock cycles of jitter is unavoidable with a running processor because it can’t respond to an interrupt flag in the middle of an instruction. Interrupts are also blocked while some other interrupt is being processed, so if the AVR is dealing with a Timer0 overflow, the LED-triggered pin change has to wait in line.

This Timer1 method increases resolution by an order of magnitude (so you can measure higher light levels), but that led me to the realization that timing jitter is not the major source of error in this system. Before light even reaches the diode it is redirected by the LED’s reflective cavity and the encapsulating lens. Sampling time is also a factor during calibration because light levels can change in an instant, so any temporal offset between your reference reading and your LED reading will also add noise.

Does light sensing with LEDs really work?

One way to demonstrate the limits of a garden-variety RGB is to cross-calibrate it against the kind of LUX sensors already in common use. Most LED manufacturers don’t worry much about standardizing these ‘penny parts’, so all the standard caveats about the limitations of empirically derived constants apply. I covered frequency shift in the index-sensor post, and there’s an obvious mismatch between the wide spectral range of a BH1750 (lux sensor) and the sensitivity band of our LED’s red channel:

Spectral sensitivity of the BH1750, from the BH1750 datasheet (Pg 3)
Fig.4.24, pg 49, Approximated Emission and Sensitivity Spectra (of an OSRAM LH-W5AM RGB LED), from: Using an LED as a Sensor and Visible Light Communication Device in a Smart Illumination System

Most of us don’t have a benchtop light source to play with, so I’m going to try this using sunlight. The variability of natural light is challenging, and the only thing that lets me use that LED band as a proxy for lux is that intensity from 400-700nm is relatively consistent at the earth’s surface.
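A minimal sketch of how those readings can be paired (assuming the common ‘BH1750’ Arduino library and the readRedPinDischarge_Timer1() function from above; taking the two readings back-to-back keeps the temporal offset between reference and LED as small as possible):

#include <Wire.h>
#include <BH1750.h>                      // e.g. the 'BH1750' library by Christopher Laws
BH1750 luxSensor;

void setup() {
  Serial.begin(9600);
  Wire.begin();
  luxSensor.begin();                     // defaults to continuous high-resolution mode
}

void loop() {
  float lux = luxSensor.readLightLevel();              // reference reading first
  uint32_t redCount = readRedPinDischarge_Timer1();    // LED reading immediately afterward
  Serial.print(lux, 1); Serial.print(",");             // CSV pair for the calibration spreadsheet
  Serial.println(redCount);
  delay(60000);                                        // one paired sample per minute
}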

The most difficult lighting conditions to work with are partially cloudy days with many transitions from shadow to full sun. Because the reference and the LED sensor sit in different physical locations within the housing, shadows that cross the logger as the sun moves across the sky will darken one of the two sensors before the other unless they are aligned on the same north-to-south axis before your tests.

Skylight also undergoes a substantial redistribution of frequencies at sunrise/sunset, and that may produce a separation between the response of the ‘yellow-green’ sensitive red LED channel and the wider sensitivity range of the BH1750.

The biggest challenge for a cross-calibration is that LEDs don’t match the ‘Lambertian’ response of our reference. A bare silicon cell has a near-perfect cosine response (as do all diffuse planar surfaces), producing a circular pattern on polar intensity diagrams. The BH1750 comes very close to that, but LEDs have a range of different patterns because of their optics:

Directional characteristics of the BH1750, from the BH1750 datasheet (Fig.5, Pg 3). This plot is in the style of the right-hand side of the Broadcom diagram, which shows both polar and linear equivalents.
Relative luminous intensity versus angular displacement, from: Broadcom Datasheet (Fig.10) for HLMP-Pxxx Series Subminiature LED Lamps

But those challenges are good things: most tutorial videos on YouTube use ‘perfect’ datasets to illustrate concepts. Data from real-world sensors is never that clean; in fact the biggest challenge for educators is finding systems that are ‘constrained enough’ that the experiment will work, but ‘messy enough’ that students develop some data-wrangling chops. Many beginners are unaware of the danger of trusting R-squared values without understanding the physical & temporal limitations of the system (you may want to expand this video to full screen for better viewing):

A note about the graphs shown below:
I’m lucky to get one clear day per week at my location, and the search for ‘the best’ LED light sensor will continue through the summer. I will update these plots with ‘cleaner’ runs as more data becomes available.

The metal reflecting cup around the diode is an unavoidable source of error in this system:

Reflectors cause convergence, leading to complex dispersion-angle plots (blue) when compared to a Lambertian cosine response (purple)

The curve will also be affected by the shape and volume of the encapsulation. Some LED suppliers provide photometric files in addition to FWHM plots for their LEDs. Of course, at the hobbyist level just finding datasheets is challenging, so it’s usually easier to take some photos of the LED against a dark grey card.

IESviewer features a rendering tool that can be used to show the spread & intensity of light emitted using photometric files from the manufacturer.

I could not find any information for the cheap eBay parts I’m using, so I decided to start with a 5050 LED, which has very little lens material over the diode:

Both sensors are suspended on the inside of the logger housing with transparent Gorilla-brand mounting tape. Orange lines highlight areas where my deployment location suffers from unavoidable interference with the calibration: the light is reduced by passing through both the HDPE of the housing lid & a glass window.

The 5050 response crosses the Lambertian curve several times but the pattern still broadly follows the reflector cup diagram: the LED response shows a noon-time ‘deficit’ relative to the brighter ‘shoulders’ at midmorning & midafternoon.

The logger was suspended in a south facing skylight window during these tests. Window frame shadow crossing events produce error spikes in opposite directions at ~6:30 am & pm, while wind-driven tree leaf shadows can produce errors in both directions from about 3:00 to 6:65 pm depending on whether the BH1750 or the LED is temporarily shaded. This was the least compromised location I could find in my urban environment.

Now let’s look at a clear 5mm RGB LED:

After omitting the shadow-crossing events (orange circles), the 5mm clear LED shows large % errors due to strong focusing by the lens when the sun is directly above the emitter. This LED would make a terrible ambient light sensor, but the curves are so well defined that, with a little work, it could be used to determine the angle of the sun as it progresses across the sky without any moving parts.

This non-diffused pattern is predicted by Figure 10 in the Broadcom datasheet, with the tight dispersion angle of the lens producing a strong central ‘hot spot’. The overall pattern is inverted relative to the 5050 (which is primarily just the metal reflector cup), although the effect of the lens is much stronger. Adding small glass particles to the epoxy diffuses the light, reducing the ‘focusing power’ of that lens:

5mm diffused round RGB vs BH1750 lux. Outside the areas with external interference, the %RE is ±12%

The diffused 5mm response could be seen as an ‘intermediate mix’ of the 5050 & clear LED response curves. We can modify the response further by sanding the top of the LED flat:

5mm diffused LED with the lens sanded off. The morning was overcast on this day until about 10am, with full sun after that. This eliminated the expected 7AM ‘shadow crossing’ error; however, the change in lighting conditions also upset the symmetry of the overall response in terms of the trendline fit.

Removing the lens returns us to a pattern similar to the 5050 – dominated by the effect of the metal reflector. So the key to making this calibration work will be finding a combination of lens & diffuser that brings the LED response closer to the BH1750:

10mm diffused LED vs BH1750 lux. The overall shape & %error range is similar to the 5mm diffused but the slopes are reduced because the lens is less aggressive & the diffusing epoxy is thicker.
10mm diffused LED covered with two thin sheets of PTFE over the dome. The two layers of plumber’s tape are applied perpendicular to each other and held in place with clear heat shrink.

PTFE tape is such a good diffusing material that it has disrupted the smooth refraction surface of the lens – essentially returning us to the 5050 pattern we saw after the physical removal of the lens from the 5mm LED.

10 mm diffused LED with top sanded flat & two crossing layers of PTFE tape to provide a ‘diffusely reflecting’ surface -> one of the requirements for Lambert’s cosine law

Finally we have a combination where the errors no longer show a clearly defined structure, with the noise randomly distributed around zero. We still have ±10% cloud-noise, but that is related to the time delta between the reference readings and the LED reading – so data from the LED alone will be cleaner. This two-step modification turns a garden-variety LED into a reasonable ambient light sensor, and the PTFE tape is thin enough that the LED is still usable as a status indicator.

Why is the LED a power law sensor?

Power laws are common in nature, often arising when a relationship is controlled by surface-area-to-volume ratios. As near as I understand it: when absorbed photons generate electron-hole pairs in the diode, only those pairs generated in the depletion region, or very close to it, have a chance to contribute to the discharge current, because there is an electric field present to separate the two charge carriers. In a reverse-biased p-n junction, the thickness of this depletion region is proportional to the square root of the bias voltage. So the volume of diode material that can ‘catch photons’ shrinks as the square root of the voltage we initially placed across the diode – and that voltage falls as each captured photon drains the capacitive charge stored at the ‘surfaces’ of the diode. So the active volume gets smaller while the surface area is left relatively unchanged. I’m sure the low-level details are more complicated than that, and power-law patterns arise in so many different systems that it might be something else entirely (?)
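For what it’s worth, the textbook one-sided abrupt-junction approximation points the same way (symbols are the usual ones: ε the semiconductor permittivity, N the doping on the lightly doped side, q the electron charge, V_bi the built-in potential, V_R the remaining reverse bias, and A the junction area):

$$ W(V_R) \;\approx\; \sqrt{\frac{2\,\varepsilon\,(V_{bi}+V_R)}{q\,N}} \qquad\Longrightarrow\qquad \text{collecting volume} \;\approx\; A\,W \;\propto\; \sqrt{V_{bi}+V_R} $$

So as the stored charge drains and V_R falls, the photon-catching volume shrinks roughly as the square root of the remaining voltage – at least consistent with the power-law behaviour we see empirically.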

Enhance your Logger with an OLED & TTP223 Capacitive Touch Switch

A capacitive touch switch works through the lid of the housing. This lets you do things like check the battery status without disturbing your experiment on long runs.

I highlighted these cheap OLED screens as a useful addition in the 2020 build tutorial, but given that a typical deployment leaves the logger alone for long periods of time where nobody will see it, some have asked if it’s worth sand-bagging the unit with a 20mA drain on every reading cycle (i.e. larger than the rest of the logger combined). So today I want to explore another addition to the EDU build that makes screens viable without hurting the power budget. In sleep mode these screens only draw about ~20 μA (even if you leave the redundant regulator in place), so the key is only triggering the pixels when there’s someone around to actually see them.

For text output I like the SSD1306Ascii library, which is available through the library manager. Greiman’s libraries are some of the best on offer whenever you need low-power operation and a small memory footprint. With that installed you can drive the SSD1306 with a basic set of commands:

//  Compiler instructions at the start of your program: 
#include <Wire.h>                          // the screen is an I2C device, so Wire is needed too
#include <SSD1306Ascii.h>                  // the main library itself
#include <SSD1306AsciiWire.h>              // uses the I2C peripheral inside the Arduino
SSD1306AsciiWire oled;                     // create a library object called oled
#define   oled_I2C_Address   0x3C          // 0x3C or 0x3D depending on the manufacturer

//  basic screen initialization in setup()  - this MUST come after Wire.begin() starts the I2C bus
oled.begin(&Adafruit128x64, oled_I2C_Address);
oled.setFont(System5x7);     // fonts specified in setup are included with the compile

// sending information to the screen at the end of the main loop, after the SD save
// you can also package these up into a stand-alone function (see below)

oled.clear();                          // erases anything displayed on the screen
oled.setCursor(0,2);           // ( 0-127 pixel columns , 0-7 text rows)
oled.set1X();                        // set single-row font height for labels
oled.print(F("B280 T"));      // standard .print syntax supported
oled.setCursor(46,2);         // move cursor to column 46, but remain in same row
oled.set2X();                         // set double-row font height for readability
oled.print(bmp280_temp,2);    // a float variable, limited to two decimal places
oled.set1X();
oled.print(F("o"));                    // a lower case 'o' for the degree symbol
oled.set2X();
oled.print(F("C"));
 // … etc ... add more here until the display is ‘full’

// display pixels can be enabled at ANY time later . . .
oled.ssd1306WriteCmd(SSD1306_DISPLAYON);   // turn on the screen pixels
    delay(10000); // enough time to read the information 
oled.ssd1306WriteCmd(SSD1306_DISPLAYOFF);   // turn OFF the screen pixels

I’m only including the print statements for one line of the display shown here, but hopefully you see the pattern well enough to lather-rinse-repeat. The pixels do not need to be ‘turned on’ while you load the screen memory, and that data persists as long as the screen has power. So if you wanted to tackle more advanced graphic output, you could build plots ‘one point at a time’ in the display’s own memory without needing a 128-element array to buffer that data.
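A rough sketch of that plotting idea (hypothetical, not from our builds: the function name, the 6-pixel column step, and the min/max scaling are all just assumptions, and it relies on the oled object created above). Each reading cycle adds one marker character to the next column and lets the screen’s persistent memory hold the history:

uint8_t nextPlotColumn = 0;                        // global: pixel column for the next point

void addPointToOLEDplot(float value, float minVal, float maxVal) {
  if (value < minVal) { value = minVal; }          // clamp out-of-range readings
  if (value > maxVal) { value = maxVal; }
  // map the reading onto text rows 1 (top of the plot area) to 7 (bottom of the screen)
  uint8_t row = 7 - (uint8_t)(((value - minVal) / (maxVal - minVal)) * 6.0 + 0.5);
  oled.set1X();
  oled.setCursor(nextPlotColumn, row);
  oled.print('.');                                 // one System5x7 character is ~6 pixel columns wide
  nextPlotColumn += 6;
  if (nextPlotColumn > 120) {                      // reached the right edge: wipe & start a new plot
    nextPlotColumn = 0;
    oled.clear();
  }
}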

The TTP223 Touch Switch:

Here I disabled the LED by removing its limit resistor, set the mode to momentary low, and trimmed the header pins so the upper surface is flat. The unmarked solder pads on the upper right are where you would add trimming caps to reduce sensitivity. These switches self-calibrate for 0.5 sec at startup, so they need a bit of ‘settling time’ at power-on.

These small capacitive switches can be had for less than ~20¢ each on eBay. The power LED wastes about 8mA, so it needs to be disabled for logger applications. Then you set the operating mode by bridging the A / B solder pads. These switches are so sensitive that moving a finger within 2cm of the surface will trigger them. In applications where the sensor is exposed you probably need to add a 10-20 pF ‘trimming cap’ to prevent self-triggering. Noise on your rail can also set them off, and you don’t want loose wires inside the housing near that sensing pad. TTP223 chips do not drive their output rail-to-rail, but they still work OK driving MOSFETs.

A small square of double-sided tape holds the TTP223 inside the student build just above the batteries. (The outermost rows of the breadboards are power rails.)

In our case the sensor will be under 1.5mm of HDPE and another millimeter of double-sided foam tape, so the default sensitivity level is almost perfect for sensing through the lid of the housing. With the LED disabled, the TTP223 module pulls about 5μA when NOT being triggered, and 100μA when actively signaling the logger. Here I’ve connected the switch using 10cm pre-made jumpers, but be careful not to place the switch in a position on the lid where it might cast a shadow over any light sensors as the sun moves through the sky. With our most recent student logger, you also need to shift the indicator LED over to R4-GND5-Gr6-Bl7 so that the switch output (orange wire) can be fed into the hardware interrupt on D3. This chip produces ‘clean’ transitions so you don’t need to worry about de-bouncing. These switches seem to work fine without a pullup, but I’ve been enabling the internal pullup on D3 anyway.

Sensors such as rain gauges generate on/off interrupt signals at any time due to environmental conditions. But no matter how many times that happens, only the RTC’s sampling interval alarm should control the ‘regular’ read cycles in the main loop. To handle multiple interrupt sources we ‘trap’ the processor in a Do-While loop that checks flag variables (set in the associated ISR functions) to see where the wake-up signal originated.

In the code below, if the RTC flag variable is still false when the processor reaches the while(flag status check) at the end of the loop, then the program gets sent back to the initial do{ statement. This code assumes you have already programmed the screen’s memory with the data to display:

//  needs #include <LowPower.h> (the LowPower library) and two global flags declared at the top:
//  volatile boolean rtc_d2_INT0_Flag = false;    volatile boolean d3_INT1_Flag = false;

//  when you are about to put the logger to sleep  (after setting the RTC alarm)

pinMode( 2, INPUT_PULLUP );        // I always use pin D2 for the RTC alarm signal
rtc_d2_INT0_Flag = false;          // D2 interrupt flag – this can only be set true in the ISR

pinMode( 3, INPUT_PULLUP );        // Cap. switch output is connected to D3
d3_INT1_Flag = false;              // setting false here prevents any display until the TTP223 is pressed

EIFR=EIFR;           // clears old 'EMI noise triggers' on BOTH hardware interrupt lines

do {                       // The ‘nesting order’ here is critical:

attachInterrupt (1,switchPressed_ISR, FALLING);
            attachInterrupt(0, rtc_d2_ISR_function, LOW);
                         LowPower.powerDown(SLEEP_FOREVER, ADC_OFF, BOD_ON);
            detachInterrupt(0);                    // MUST detach the high priority D2 first
detachInterrupt (1);                               // then detach the lower priority D3 interrupt

// Here we are simply powering the OLED display pixels but any other code put here
// is isolated from the main sequence, so you could trigger ‘special’ sensor readings, etc. 

  if (d3_INT1_Flag == true){
        oled.ssd1306WriteCmd(SSD1306_DISPLAYON);   // turn on the screen pixels
             attachInterrupt(0, rtc_d2_ISR_function, LOW);
               LowPower.powerDown(SLEEP_8S, ADC_OFF, BOD_ON);  // long enough to read
             detachInterrupt(0);
        oled.ssd1306WriteCmd(SSD1306_DISPLAYOFF);   // turn OFF the screen pixels
        d3_INT1_Flag = false;   // reset D3 interrupt flag
        }

} while (rtc_d2_INT0_Flag == false);    // if RTC flag has not changed repeat the ‘trap’ loop 

if (rtc.checkIfAlarm(1)) { rtc.turnOffAlarm(1); }   // processor awake so disable the RTC alarm
EIFR=EIFR;    // clears leftover trigger-flags from BOTH D2 & D3

// now return to the start of the main loop & capture the next round of sensor readings


// and each attached interrupt requires an ISR function   (outside the main loop!)
void switchPressed_ISR() {
  if (d3_INT1_Flag==false){ d3_INT1_Flag = true; }     // Flag variables set in an ISR must be global & 'volatile'
}

void rtc_d2_ISR_function() {
  if (rtc_d2_INT0_Flag==false){ rtc_d2_INT0_Flag = true; }
}

BOTH the TTP223 on D3 and the RTC alarm on D2 can wake the logger from sleep – but switchPressed_ISR() does not change the critical flag variable, so a wakeup caused by D3 will not break out of the do-while loop. Only rtc_d2_ISR_function() can set the tested flag to true and thus escape from the trap. This general method works equally well with inputs from reed-switch sensors for wind, rain, etc. With a few tweaks the same basic idea can also be used to capture ‘opportunistic’ sensor readings – provided you save those to a different logfile, or tag the ‘extra’ records with something that’s easily separated from the regular records later in Excel (see the sketch below).
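A rough sketch of that tagging idea (hypothetical: it assumes the standard SD library is already initialised elsewhere in your logger code, and re-uses the TimeStamp string from the display functions below). This fragment would sit inside the ‘if (d3_INT1_Flag == true)’ branch of the trap loop:

File eventFile = SD.open("EVENTS.CSV", FILE_WRITE);       // a separate logfile for the 'extra' records
if (eventFile) {
   eventFile.print(TimeStamp);                            // the logger's normal timestamp string
   eventFile.print(",EVENT,");                            // tag so these rows are easy to separate in Excel
   eventFile.println(readRedPinDischarge_Timer1());       // the opportunistic reading itself
   eventFile.close();
}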

Note that the two interrupt sources use different triggers: FALLING for the TTP223, and LOW for the RTC. It doesn’t really matter if a switch transition gets missed (you can just tap it again), but the whole logger operation is affected if the RTC alarm fails. The RTC must also be able to interrupt the ‘display time’. Fortunately the RTC’s alarm output is latched, so it will keep re-asserting the wake-up until we exit the do-while loop. The most important thing to know when using HIGH or LOW as your interrupt state is that you MUST detach the interrupt IMMEDIATELY upon waking. If you forget to detach a HIGH or LOW triggered ISR it will simply keep re-firing the moment it returns, locking up – and eventually crashing – your logger.

In addition to preventing the display from using power when nobody is around, having the ability to trigger the display at any time is a great way to check that the logger is OK without opening it – especially if all you want to see is the most recent timestamp & battery level. This is also helpful when students are running labs that can’t be physically disturbed (for example, soil sensors tend to produce significant discontinuities if they get bumped in the middle of a run). Nothing is worse than letting an experiment run for a week, only to find that the system froze up a few hours after it was started.

Handling multiple screens of information

Once you’ve got the basic screen operation working you might want to check our page on using two displays. Keeping each display in different ‘memory access modes’ lets you do some interesting graphical output tricks.  But what if you only have one screen and there just isn’t enough room to display everything at one time?

You can use flag variables &  switch-case to toggle between different screens of information:

//  #defines near the top of the program make the case labels readable, but these are just numbers
#define displaySOILdataNext 0
#define displayTEMPdataNext 1

//  set the 'first screen' you want to display before entering the do-while trapping loop
uint8_t next_OLED_info = displaySOILdataNext;

do {       // our processor 'trapping loop' described above

   // load the screen's display memory at the start of the do-while loop
   // each successive press of the touch-switch changes which data gets loaded

   switch (next_OLED_info) {

      case displaySOILdataNext:                       // next_OLED_info = 0
      {
         sendSOILdata2OLED();                         // a separate function to program the display memory
         next_OLED_info = displayTEMPdataNext;        // toggles to send different info on the next pass
         break;
      }

      case displayTEMPdataNext:                       // next_OLED_info = 1
      {
         sendTEMPdata2OLED();                         // a separate function to load the display memory
         next_OLED_info = displaySOILdataNext;        // toggle to the opposite screen on the next pass
         break;
      }
   }  // terminator for switch (next_OLED_info)

   // {insert here} all the loop content controlling wake/sleep described earlier

} while (rtc_d2_INT0_Flag == false);    // end of our processor ‘trapping loop’


// and you will need stand-alone functions for each screen of information:

void sendTEMPdata2OLED() {
oled.clear();
oled.set2X();
oled.setCursor(0,0);
oled.print(TimeStamp+5);   // +5 skips first 5 characters of the string because it’s too long

oled.set1X();
oled.setCursor(0,2);
oled.print(F("Temp1: "));
oled.set2X();
oled.setCursor(46,2);
oled.print(ds18b20_short_degC,2);
               // … etc ... add more here until the display is ‘full’
}

void sendSOILdata2OLED() {
oled.clear();
oled.set1X();
oled.setCursor(0,0);
oled.print(F("RAIL"));
oled.set2X();
oled.setCursor(70,0);
oled.print((float)ads1115_rail*0.000125,3);
                // … etc ... add more here until the display is ‘full’ 
}

The nice thing about this method is that you can toggle your way through as many different screens of information as you need, simply by adding more ‘cases’ and screen-loading functions.