
How to measure PAR (Photosynthetically Active Radiation) using a BH1750 Lux Sensor

A 3D printed stack of radiation shields goes around the 30mL centrifuge tube housing our 2-module logger. A universal ball joint by Arthur Zopfe was adapted for the leveling mechanism, which is critical for the calibration.

Space nerds have an old saying that ‘LEO is halfway to the moon…’ and Arduino hobbyists tend to feel the same way about getting sensor readings displayed on a live IoT dashboard. But that ignores the real work it takes to generate data that’s actually usable. To paraphrase Heinlein: ‘Calibration is halfway to anywhere…’ Now that our 2-Part logger is both easy for students to build and robust enough for field use, we can focus on developing sensor calibration methods that are achievable by teachers and researchers in low-resource settings.

Light sensors seem straightforward, with numerous how-to guides at Hackaday, Adafruit, Sparkfun, etc. In reality, light sensors are some of the trickiest ones to actually deploy – which is why so few low-end climate stations include them. This post describes a method for calibrating a BH1750 lux sensor to estimate Photosynthetically Active Radiation (PAR). Not everyone can afford a LI-COR 190 or Apogee SQ quantum sensor to use as a benchmark, so here we will use a clear-sky model calculation for the cross-calibration, despite the dynamic filtering effects of the atmosphere on natural sunlight. Using a diffuser to restore cosine behavior means we can’t calculate PPFD directly from lux without some y=mx+b coefficients.


Light Sensor Issue #1: Changing Spectral Distribution

Peak solar irradiance received on any given day varies by latitude and season, as does the overall pattern. Light emitted from the sun has a stable distribution of frequencies; however, the spectrum at the earth’s surface varies across the day, with more short wavelengths (blue) around midday and more long wavelengths (red) at sunrise & sunset, when the rays travel further through the atmosphere. We will avoid this source of error by calibrating with data from the hours around solar noon as determined by the NOAA Solar Calculator. Even with high quality sensors, morning and evening data can be compromised by other factors like condensation, which changes the refractive index of lenses and diffusers.
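If you’d rather estimate solar noon on the logger itself, a rough stand-in for the NOAA calculator can be built from Spencer’s equation-of-time approximation. This sketch is not part of our actual workflow, so sanity-check its output against the NOAA page before trusting it:

#include <math.h>
// Returns local solar noon in minutes after midnight, standard (not DST) time.
// longitudeDeg is +East/-West; tzOffsetHours is the offset from UTC (e.g. -5 for EST).
float solarNoonMinutes(int dayOfYear, float longitudeDeg, float tzOffsetHours) {
  float g = (2.0 * M_PI / 365.0) * (dayOfYear - 1);            // fractional year in radians
  float eqTimeMin = 229.18 * (0.000075 + 0.001868 * cos(g)     // equation of time (minutes)
                    - 0.032077 * sin(g) - 0.014615 * cos(2 * g)
                    - 0.040849 * sin(2 * g));
  return 720.0 - 4.0 * longitudeDeg + 60.0 * tzOffsetHours - eqTimeMin;
}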

Light Sensor Issue #2: Sensitivity Bands

Average plant response to light as Relative Photosynthetic Efficiency (%) vs Wavelength (nm), compared to the BH1750 response ratio vs wavelength

Lux sensors have a maximum sensitivity near 550nm, mimicking the response of photo-receptors in the human eye. Plants are similarly limited to frequencies that can be absorbed by the various chlorophylls. These two bands have a high degree of overlap, so we can avoid the Baader UV/IR-cut filters (420–685nm bandpass) or stack of Roscolux filters that would be needed with photodiodes that respond to a wider range of incoming radiation. The cross-calibration still requires the relative ratio of frequencies within the targeted region to remain stable, so a PAR conversion derived under full sunlight may not be valid under a canopy of tree leaves or for the discontinuous spectra of ‘blurple’ grow-lights.

Light Sensor Issue #3: Dynamic Range

I tested two inexpensive BH1750 sensor modules, and the diffuser dome that comes with the red ‘Light Ball’ version turned out to be the deciding factor. When powered from a 3v coin cell, these sensors add 8µA to the logger’s sleep current if you leave the 662 regulator in place, and <1µA if you remove it.

Full summer sunlight can exceed 120,000 lux, and there aren’t many sensors in the Arduino ecosystem that handle that entire range. The BH1750 can, with its registers set to the least sensitive configuration. Our logger code already does this because QUALITY_LOW & MTREG_LOW(31) integrations take only 16-24 milliseconds, rather than the 120-180ms needed for high resolution readings. The datasheet implies that the sensor will flatline before 100,000 lux, but at its lowest sensitivity it delivers reasonable data above 120k, though linearity may be suspect as flux approaches sensor saturation. The sensor also has a maximum operating temperature of 85°C, which can be exceeded if your housing suffers too much heat gain. Alternative sensors like the MAX44009, TSL2591 and SI1145 have similar thermal limits. Like most light sensors, the BH1750 increases its output readings by a few percent as the sensor warms.
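Our logger code handles this setup already, but here is a bare-bones sketch of the low-sensitivity configuration over raw I2C, assuming the module’s ADDR pin is low (address 0x23); the opcodes are from the BH1750 datasheet:

#include <Wire.h>
const uint8_t BH1750_ADDR = 0x23;      // ADDR pin low; use 0x5C if tied high
const uint8_t MTREG_LOW = 31;          // lowest sensitivity = widest lux range

void bh1750command(uint8_t cmd) {
  Wire.beginTransmission(BH1750_ADDR);
  Wire.write(cmd);
  Wire.endTransmission();
}

float bh1750readLux() {
  bh1750command(0x01);                          // power on
  bh1750command(0x40 | (MTREG_LOW >> 5));       // MTreg: high 3 bits
  bh1750command(0x60 | (MTREG_LOW & 0x1F));     // MTreg: low 5 bits
  bh1750command(0x23);                          // one-time L-resolution measurement
  delay(24);                                    // generous wait for the conversion
  Wire.requestFrom(BH1750_ADDR, (uint8_t)2);
  uint16_t raw = Wire.read() << 8;              // high byte first
  raw |= Wire.read();
  return (raw / 1.2) * (69.0 / MTREG_LOW);      // rescale for the non-default MTreg
}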

Commercial vs DIY diffusers. Bullseye level indicators are epoxied to the top shield with white JB Marine Weld. The larger 43mm diameter bubble (right) was far more effective than the smaller 15mm (left).

DIY builders often add diffusers made from translucent #7328 Acrylite or PTFE sheets to bring sunlight intensity down into a given sensor’s range. I tried printing domes with clear PETG and hand sanding them with fine grit to increase the diffusive power. While these did reduce light levels by more than 50%, my DIY diffuser didn’t quite match the smooth overall response seen with the diffusers that came with the round PCB modules. This may have been due to a slight misalignment between the sensor and the focal point of the low-poly dome I could make in Tinkercad. The white dome that comes with the red BH1750 module reduced peak light levels in full sunlight from the 110k lux reported by a ‘naked’ sensor to about 40k lux. Each sensor varied somewhat in its response, but I didn’t do any batch testing to quantify this as I was calibrating each sensor directly to the reference model. I initially tried clear JB Weld as a sealant, but this caused problems: sometimes contracting enough to peel parts away from the PCB, and yellowing significantly after a couple of weeks of full sun exposure. In later builds I used only a thin coat of silicone conformal coating, relying on an epoxy seal around the base of the diffuser to provide most of the waterproofing.

Light Sensor Issue #4: Angular Response

BH1750 Directional Characteristics [Figs 4&5] from the datasheet. Sensor response is different on the two axes, so orientation must be labeled on the outside during assembly. The left graph is closer to Lambertian, so the sensor gets deployed with its connection pads oriented north-south relative to the sun’s east-west motion. Based on these curves alone we would expect a ‘naked’ BH sensor to under-report relative to the Lambertian ideal. That is indeed what I observed in our early sensor comparison tests, leading to our selection of the round red PCB modules for the calibration, because the included diffuser dome compensated nicely.

Lambert’s cosine law describes the relationship between the angle of incidence and the level of illuminance on a flat matte surface as being proportional to the cosine of the zenith angle (as the sun changes position throughout the day). At an incident angle of 60°, the number of photons hitting a sensor surface is half what it would be if the same light source were positioned directly above the sensor. This effect is mathematically predictable, but imperfections, diffraction, and surface reflection mean that sensor response tends to diverge from the ideal as the angle increases. So manufacturers surround their sensors with raised diffuser edges and recesses on the top surface which change light collection at low sun angles to restore a perfect cosine response. In general, diffusers make the compass orientation of the sensor less likely to interfere with calibration, but leveling the sensor is still absolutely required.

Light Sensor Issue #5: Temporal Resolution

Unlike most environmental parameters, light levels can change instantaneously. Most commercial sensors aggregate 1 or 2 second readings into 5 to 15 minute averages. This makes it much easier to estimate energy output from solar panels, or to calculate the Daily Light Integral for a crop, because both of those use cases care more about the area under the curve than about individual sensor readings. However, when calibrating a sensor against an irradiance model, we must use instantaneous readings so we can exclude data from periods where the variability is high. Averaging would smooth over short-term interference from clouds, birds, or overhead wires, potentially feeding bad data into the calibration. We read the BH1750 once per minute at its fastest/lowest resolution.
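For anyone who wants to screen for unstable periods programmatically rather than by eye, here is one minimal way to do it; the 3% tolerance and the function name are arbitrary choices rather than anything from our logger code:

#include <math.h>
// Keep a 1-minute reading only when it agrees with both of its neighbours
// to within a fractional tolerance (0.03 = 3% is just a starting point).
bool isStable(float previous, float current, float next, float tolerance) {
  return fabs(current - previous) <= tolerance * current &&
         fabs(next - current) <= tolerance * current;
}
// e.g. keep lux[i] only if isStable(lux[i-1], lux[i], lux[i+1], 0.03)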


A Radiation Shield

My original concept was to epoxy the light sensor directly onto the cap and slide separate radiation shields over the housing tube with a friction fit – but that approach suffered excessive heat gain. It took several design iterations to discover that plastics are often transparent to IR – so most of the 3D printed weather station shields you find in maker spaces won’t work very well. While PLA does block/reflect the visible spectrum, it re-emits a portion of any absorbed energy as IR, which passes right through – turning the central housing tube into a tiny greenhouse. You need to add layers of metal foil to reflect that IR, and there must be an air gap between the materials or the heat still crosses by conduction. The process of moving those surfaces away from the logger also meant placing the sensor onto a small raised ‘stage’ that could pass through the upper shield. This allows easier replacement after the sensors expire, or the use of an entirely different sensor without changing the rest of the design. I still don’t know the operating life of these sensors at full sunlight exposure levels.

2″ Aluminum HVAC tape is applied to the IR shield layer.
The IR shield slides to about 8mm below the top shield which has holes along the rim to vent heated air.
The sensor stage slides on the vertical rails and passes through the upper shield.
The logger’s green cap then pushes the sensor stage into place with a snug click-fit. Foil is wrapped around the logger housing tube.
Three smaller gill shields slide onto the rails, with plenty of aligned holes for vertical airflow through to the top shield.
A lower IR shield is added to the bottom with metal side down to reflect thermal radiation emitted from the ground.

Here are temperature records of two side-by-side loggers with identical 3D-printed shields except that one has the three metal foil layers added and one does not:

Temp (°C) vs Time: Comparison of heat gain with, and without, metal foil layers. Measured with the NTC sensor inside the logger housing at the center of the stack. The nighttime data shows a 0.25°C offset between the two sensors, indicating that they were not normalized before this run.

Interestingly, the 3°C delta seen in my foil vs no foil tests matched the discrepancies identified by Terando et al. in their 2017 paper examining ad hoc Stevenson shields in ecological studies. Air gaps are required for IR reflecting layers to do their job, so most of the foil backed roofing shingles on the market are ineffective because of direct surface contact. Both aluminum and stainless steel foils are common, but aluminum has a lower emissivity than stainless steel, meaning it should reflect more and emit less IR. There are also radiant barrier coating sprays used in industrial settings. High-end weather stations use fan ventilation or helical shields, but those designs may be a bit too complicated for DIY. And even 3D prints from tough materials like PETG or ASA would benefit from coating with something like Krylon UV protectant to extend their lifespan. I’ve also been thinking about adding some infrared cooling paint on the top surface of our weather stations. The challenge with anything that emits in the atmosphere’s transparency window between wavelengths of 8 and 13 microns is that you get significant accumulation of debris on surfaces in as little as one month of actual deployment: especially in the spring/fall when the surfaces get covered with morning dew which then captures any windborne dust.

I’m still tweaking the shield design as more test data comes in, and hope to compare it to a fan aspirated model soon. Radiation shields are only needed if you want to capture accurate temperatures with the light readings on the same logger. The Bh1750 calibration alone could be done without shields, but mounting the sensor on some kind of flat surface makes it easier to add the required leveling bubble beside the sensor. The tradeoff for preventing solar heat gain is that shields introduce lag in the temperature response.

Pole Mount & Leveling Mechanism

As this is the first of our ‘garden series’ that will be built around the 2-part logger, I created a complete mounting system from a combination of 3D printed parts and PVC pipes. This adjustable leveling mechanism was modified from the Open Source Universal Ball Joint posted on Thingiverse by Arthur ZOPFE.

This socket slides over the end of a 1/2″ PVC pipe. A zip tie through the drilled cross-hole secures the pieces together.
A self standing 30mL centrifuge tube slides snugly into this fitting, again with holes for zip ties.
A large diameter twist ring makes it easy to adjust the sensor assembly while watching the bulls-eye level on the top shield.

This ball & socket approach works well for leveling, but to make the adjustments easier (i.e. with less compressive force) I will add an O-ring to the bottom cup for some friction and give.

This ground spike has a foot plate to assist insertion and is asymmetric to provide more contact with the bed. It just barely fits on my Ender3 when printed diagonally. I created this model from scratch in Tinkercad, but the offset idea is not mine. Unfortunately, I saw the original so long ago I don’t know who to credit for it. The pole insert and holes are six-sided because internal 45° slopes can be printed without any supports, and you can simply bridge the internal 1cm top span.

A length of standard 1/2 inch PVC pipe is used for the riser between the spike and the leveling mechanism. Ideal height for temperature sensors is approximately five feet above the ground, usually in a shaded location facing away from the sun.


The Apogee Clear Sky Calculator

With this model we could even attempt a calibration against the shortwave spectrum for a DIY pyranometer, but it’s a much bigger stretch to say the 550nm peak of BH sensitivity is a good proxy for the whole 300-1300nm band of frequencies.

The Apogee Clear Sky Calculator helps owners of Apogee’s many light sensor products check whether those need to be sent in for re-calibration. When used near solar noon on clear, unpolluted days the accuracy is estimated to be ±4%. We can cross-calibrate the readings from our humble BH1750 to that model provided we use data from a cloudless day. I’m not sure what the temporal resolution of the ClearSky model is (?) The U.S. Climate Reference Network generally uses two-second readings averaged into five-minute values, so it is likely that the ClearSky model has a similar resolution. The model has the best accuracy within one hour of solar noon, but we will push that out a few hours so we have enough data for the regression.

We could have used the Bird Clear Sky Model from NREL, with validation against real-world data from one of the local SURFRAD stations at NOAA. That data is for full-spectrum pyranometers measuring in W/m², but you can estimate PAR as photosynthetic photon flux density (PPFD) from total shortwave radiation using a conversion factor into µmol m⁻² s⁻¹. Many solar PV companies provide online calculators for power density that could also be used for this kind of DIY sensor calibration.
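As a sketch of that conversion: a commonly cited clear-sky factor is roughly 2 µmol of PAR photons per joule of total shortwave (PAR is very roughly 45% of shortwave, at about 4.57 µmol per joule within the PAR band). The exact constant shifts with sky conditions, so treat it as an estimate only:

// Estimate PPFD (µmol m-2 s-1) from total shortwave irradiance (W/m2).
// The 2.02 factor is a commonly cited clear-sky approximation, not a constant.
float shortwaveToPPFD(float shortwave_Wm2) {
  return shortwave_Wm2 * 2.02;
}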

Our Deployment Location

Most who live in urban areas are familiar with noise pollution; it is just as hard to find undisturbed light environments. My best option for those critical hours around solar noon was my neighbour’s backyard garden:

The two sensors here are aligned on the east-west axis so they can be compared.

This location was relatively free of power lines and treetops, but reflections from that white door caused a slight positive offset in the afternoon. Fences prevented the capture of morning and evening data, which would have been interesting. But sunrise-to-sunset data is not required for our calibration.

The Calibration

After several weeks of logger operation we finally managed to capture data from a beautiful cloud-free day:

2024-07-27: Lux from a diffused ‘Light Ball’ BH1750 sensor (orange, left axis @ 1min) vs ClearSky Model PPFD (purple, right axis @ 5min). You can see some stair-stepping in the model data, indicating that its temporal resolution might be only 10-15 minutes.

We logged raw single-shot lux readings at one minute intervals, and because there is no averaging applied you can clearly see where overhead lines or birds created occasional short-duration shading. These outliers were excluded before generating the trendline shown below. The PAR values from the model were calculated using the ‘Auto fill’ option for humidity and temperature. On this day solar noon was at 12:57.

Linear y=mx+b fit between ClearSkyCalculator PPFD (y-axis) and diffused BH1750 lux (x-axis), using 5-minute data points on 2024-07-27 between 10:00 and 16:00 [bracketing solar noon by three hours]. Two shadow outliers at 10:05 and 10:15am were excluded from the dataset.

Aerosols and variations in local temp/humidity produced some scatter, but this is a good result for calibration with natural light. The result might be improved by co-deploying a humidity sensor, but it’s not clear to me if humidity at ground level is what the model actually uses for its calculation. Some scatter is also being created by the temporal resolution of the model. Using one type of sensor as a proxy for another limits the scope of the device, and we probably approached an accuracy of ±15% at best with this conversion. It’s worth remembering that most commercial light sensors are only calibrated to ±5%.
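On the logger, the finished calibration reduces to a single line. The coefficients below are placeholders rather than the values from the fit above, since every sensor/diffuser pair will differ:

const float CAL_M = 0.05;     // hypothetical slope (µmol m-2 s-1 per lux) - use your own fit
const float CAL_B = -12.0;    // hypothetical intercept - use your own fit
float luxToPPFD(float lux) {
  return CAL_M * lux + CAL_B;
}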


Discussion

The biggest challenge at our Midwest location was that we had to run the loggers for several weeks before capturing the blue-sky day shown above. A typical time series from that BH1750 sensor (under a light-reducing diffuser dome) looks like this:

Lux vs Time: 1-minute data captured with our 2-Part logger reading a red ‘light-ball’ BH1750 module.
This unit had an extra 64k EEPROM added to store the large amount of data that was generated.

Clouds often cause light levels to exceed those seen on clear days. This makes sense if you imagine a situation where there are no clouds directly overhead, but radiation reflected from the sides of clouds is reaching the sensor from multiple directions. The fact that clouds at different atmospheric levels have different effects is one of the things that makes climate models so complicated.

The Clear-Sky Calculator lets you generate data for any date/time, so it would be possible to do this calibration by aggregating cloudless periods from multiple days:

Detail of data from 7/15 and 7/12: what you are looking for is the smooth curve indicating there were no high-level clouds causing subtle variations in light level.
Inexpensive (~$60USD) PAR meters have started appearing on Amazon recently. I’m more than a little dubious about the term ‘quantum’ in the marketing, as they are probably just a photodiode and some filters.

Someone in Nevada would have no trouble gathering this kind of calibration data, but it might not be possible for people living in Washington. A low-cost alternative to using a clear-sky model for the calibration could be to compare the BH1750 to one of the many smartphone grow-light meter apps, with a clip-on diffuser & cosine corrector. Every phone has a different sensor, so programs like Photone or PPFDapp usually have their own calibration procedures. While developing this exercise I also found a ‘for parts’ Seaward Solar Survey 100 on eBay for $20, and all it needed to bring it back to life was a good cleaning inside. I also found an old LI-1400 logger with a 190 pyranometer for only $120, and was pleasantly surprised when Apogee’s calculator showed it was still within 5%. As mentioned, you’d need to convert total radiation from those last two into PAR, or you could do the calibration to total shortwave. Hardware references that lack logging capability require more effort to gather calibration points, but they save you from having to wait for agreeable weather.

Other projects have built similar sensors, and with calibration lux sensors are comparable to commercial PAR sensors if the spectral environment is consistent. Multi-channel sensors with overlapping frequency bands do a better job in situations with discontinuous light sources, like those used for indoor growing, or for measuring the extinction of PAR frequencies under water. In those cases a TCS3471 (3-channel), AS7341 (10-channel), or AS7265x (18-channel) sensor could be used, and finer frequency division can enable calculation of interesting ratios like NDVI or SPAD. Beyond that point you’re entering the realm of diffraction grating spectrometers, which allow a more nuanced approach to spectral weighting functions that differ from standard PAR.

And if building your own datalogger is too challenging, you could reproduce the exercise described in this post with a Bluetooth UNI-T meter or a UT381 Digital Luminometer, which has some logging capability. But you will need to add extra diffusers to bring full sunlight down below its 20,000 lux limit.


NREL Bird Clear Sky Model
Clear Sky Calculator from Apogee Instruments
NOAA SURFRAD data from irradiance measuring stations
Downloading from the National Solar Radiation Database.
Shortwave Radiation by Steve Klassen & Bruce Bugbee
Fondriest Solar Radiation & Photosynthetically Active Radiation
Designing a Low-Cost Autonomous Pyranometer by Peter van der Burgt
Various DIY PAR meter discussions at Planted Tank
Build Your Own Pyranometer by David Brooks
Ad hoc instrumentation methods in ecological studies produce biased temperature measurements. Terando et al. (2017)
Choosing Standard Bulbs for DIY PAR meter calibration
Daily Light Integral requirements for different plants.
PARbars: Cheap, Easy to Build Ceptometers
Creating a Normalized Vegetation Index Sensor with two LEDs
Hacking the Rubisco enzyme boosts crop growth 40%
Plants recycle UV into red light
How to calibrate NTC thermistors
How to build our 2-Part ProMini Data Logger

How to Normalize a Group of Pressure Sensors so they can be Deployed as a Set

Once your project starts to grow, it’s common to have multiple different sensors, from different vendors, measuring the same environmental parameter. Ideally those sensors would produce the same readings, but in practice there are significant offsets. Datasheets for the MS5837-02BA and MS5803-14BA that we will compare in this post claim an accuracy of ±0.5 mbar and ±2°C for the 2-bar sensor, while the 14-bar sensors are only rated to ±20 mbar and ±2°C. Sensors from Measurement Specialties are directly code compatible, so the units here were read with the same oversampling settings.

Barometric pressure from a set of nine MS58xx pressure sensors running on a bookshelf as part of normal burn-in testing. The main cluster has a spread of about 10 millibar, with one dramatic outlier >20 mbar from the group. These offsets are much wider than the datasheet spec for the 2-bar sensors.

But this is only a starting point: manufacturers have very specific rules about things like the temperature ramps during reflow, and it’s unlikely that cheap sensor modules get handled that carefully. Housing installation adds both physical stress and thermal mass, which will induce shifts, as can the quality of your supply voltage. Signal conditioning and oversampling options usually improve accuracy, but there are notable exceptions like the BMP/BME280, which suffers from self-heating if you run it at the startup defaults.

As described in our post on waterproofing electronics, we often mount pressure sensors under mineral oil with a nitrile finger-cot membrane, which adds thermal lag.

Sensors like NTC thermistors are relatively easy to calibrate using physical constants. But finding that kind of high-quality benchmark for barometric sensors is challenging if you don’t live near a government-run climate station. So we typically use a normalization process to bring a set of different sensors into close agreement with each other. This is a standard procedure for field scientists, but information on the procedure is hard to find because the word ‘normalization’ means different things in various industry settings. In Arduino maker forums it usually describes scaling the axes of a single accelerometer with (sensor − sensor.min)/(sensor.max − sensor.min), rather than standardizing a group of different sensors.

When calibrating to a good reference you generally assume that all the error is in your cheap DIY sensor, and then do a linear regression by calculating a best-fit line with the trusted data on the Y axis of a scatter plot. However, even in the absence of an established benchmark you can use the same procedure with a ‘synthetic’ reference created by drawing an average from your group of sensors:

Note: Sensor #41 was the dramatic outlier more than 20 millibar from the group (indicating a potential hardware fault), so its data is not included in the initial group average.

With that average you calculate y=Mx+B correction constants using Excel’s SLOPE & INTERCEPT functions. Using these formulas lets you copy/paste equations from one data column to the next, which dramatically speeds up the process when you are working through several sensors at a time. It also recalculates those constants dynamically when you add or delete information:
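If you’d rather script the process, the same constants come from the standard least-squares formulas behind Excel’s SLOPE & INTERCEPT. A minimal sketch, with x as the raw readings from one sensor and y as the group-average reference:

// Least-squares line fit: *m and *b match Excel's SLOPE() and INTERCEPT().
// Computed from centered sums to protect float precision on long series.
void fitLine(const float *x, const float *y, int n, float *m, float *b) {
  float xMean = 0, yMean = 0;
  for (int i = 0; i < n; i++) { xMean += x[i]; yMean += y[i]; }
  xMean /= n;  yMean /= n;
  float num = 0, den = 0;
  for (int i = 0; i < n; i++) {
    num += (x[i] - xMean) * (y[i] - yMean);
    den += (x[i] - xMean) * (x[i] - xMean);
  }
  *m = num / den;                // slope
  *b = yMean - (*m) * xMean;     // intercept
}
// corrected reading = m * raw + b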

The next step is to calculate the differences (residuals) between the raw sensor data and the average, both before and after the y=Mx+B corrections have been applied to the original pressure readings. These differences between the group average and an individual sensor should be dramatically reduced by the corrections:

After you copy/paste these calculations to each sensor, create x/y scatter plots of the residuals so you can examine them side-by-side:

Now we can deal with the most important part of the entire process: normalization with bad input data will produce even more misleading results. While the errors shown above are centered around zero, the patterns in these graphs indicate that we are not finished. In the ideal case, residuals should be soft, fuzzy distributions with no observable patterns. But here a zigzag shows up for most of the sensors – an indication that one (or more) of the sensors included in the average has some kind of problem. Scrolling further along the columns identifies the offending sensors by their nasty-looking residual plots after the corrections have been applied:

Sensor #41 (far right) was already rejected from the general average because of its enormous offset, but the high amplitude jagged residual plots indicate that the data from sensors #45 and #42 are also suspect. If we eliminate those two from the reference average the zigzag pattern disappears from the rest of the sensors in the set:

There’s more we could learn from the residual distributions, but here we’ve simply used them to prune our reference data, preventing bad sensor input from harming the average we use for our normalization.

And what do the sensor plots look like after the magic sauce is applied?

The same set of barometric pressure sensors, before and after normalization corrections. (minus #41 which could not be corrected)

It’s important to note that there is no guarantee that fitting your sensors to an average will do anything to improve accuracy. However, sensors purchased from different vendors, at different times, tend to have randomly distributed offsets. In that case normalization improves both precision and accuracy, but the only way to know if that has happened is to validate against some external reference like the weather station at your local airport. There are several good long term aggregators that harvest METAR data from these stations like this one at Iowa State, or you can get the most recent week of data by searching for your local airport code at weather.gov

METAR is a weather reporting format used predominantly by pilots and meteorologists, and those stations report pressure adjusted to Mean Sea Level (MSL). So you will have to adjust your data to MSL (or reverse the correction on the airport data) before you can compare it to the pressure reported by your local sensors. For this you will also need to know the exact altitude of your sensors when the data was gathered, to remove the height offset between your location and the airport station.

Technically speaking, you could calibrate your pressure sensors directly to those official sources. However there are a lot of Beginner, Intermediate and Advanced details to take care of. Even then you still have to be close enough to know both locations are in the same weather system.

Here I’m just going to use the relatively crude adjustment equations:
Station Pressure = SLP − (elevation / 9.2)    and    millibar = inchHg × 33.8639
to see if we are in the ballpark.
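The same crude adjustments as code. Note that the elevation units for the /9.2 divisor are an assumption on the reader’s part to match their source data; the whole point of these equations is only to get in the ballpark:

float inchHgToMillibar(float inHg) {
  return inHg * 33.8639;
}

float stationPressure_mb(float slp_mb, float elevation) {
  return slp_mb - (elevation / 9.2);   // ballpark correction only
}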

Barometric data from the local airport (16 miles away) overlaid on our normalized pressure sensors. It’s worth noting that the airport data arrives at strange odd-minute intervals, with frequent dropouts, which would complicate a calibration to that reference.

Like most pressure sensors, an MS58xx also records temperature because it needs that for its internal calculations. So we can repeat the entire process with the temperature readings from this sensor set:

Temperatures °C from a set of MS58xx Pressure sensors: before & after group normalization. Unlike pressure, this entire band was within the ±2ºC specified in the datasheet.

These sensors were sitting pretty far back on a bookshelf that was partly enclosed, so some of them were quite sheltered while others were exposed to direct airflow. So I’m not bothered by the spikes or the corresponding blips in those residual plots. I’m confident that if I had run this test inside a thermally controlled environment (i.e. a styrofoam cooler with a small hole in the top) the temperature residuals would have been well behaved.

One of the loggers in this set had a calibrated NTC thermistor onboard. While this sensor had significant lag because it was located inside the housing, we can still use it to check if the normalized temperatures benefit from the same random distribution of errors that were corrected so nicely by the pressure normalization:

Once again, we have good alignment between a trusted reference (in red) and our normalized sensors.

Comments:

Normalization is a relatively low-effort way to improve sets of sensors – and it’s vital if you are monitoring systems that are driven primarily by gradients rather than absolute values. The method generalizes to many other types of sensors, although a simple y=Mx+B approach usually does not handle exponential sensors very well. As with calibration, the data set used for normalization should span the range of values you expect to gather with the sensors later on.

The method described here only corrects differences in offset [the B value] & gain/sensitivity [the M value] – more complex methods are needed to correct non-linearity problems. To have enough statistical power for accuracy improvement you want a batch of ten or more sensors, and it’s a good idea to exclude data from the first 24 hours of operation so brand-new sensors have time to settle. Offsets are influenced by several factors, and some sensors need to ‘warm up’ before they can be read. The code driving your sensors during normalization should be identical to the code used to collect data in the field.

All sensor parameters drift so, just like calibration, normalization constants have a shelf life. This is usually about one year, but can be less than that if your sensors are deployed in harsh environments. Fortunately this kind of normalization is easy to redo in the field, and it’s a good way to spot sensors that need replacing. You could also consider airport/NOAA stations as stable references for drift determination.


References & Links:

Decoding Pressure @ Penn State
Environmental Mesonet @ Iowa State
Calibrating your Barometer: Part1, Part2 & Part3
How to Use Air Sensors: Air Sensor Guidebook
ISA Standard Atmosphere calculator
Starpath SLP calculator
SensorsONE Pressure Calculators
Mean Sea Level Pressure converter


I have to add a special mention here of the heroic effort by liutyi comparing different temperature & humidity sensors. While his goal was not normalization, the graphs clearly demonstrate how important that would be if you were comparing a group of sensors. Humidity sensors have always been a thorn in our side – both for lack of inter-unit consistency and because of their short lifespan in the field relative to other types of sensors. The more expensive Sensirions tend to last longer – especially if they are inside one of those protective shells made from sintered metal beads. KanderSmith also did an extensive comparison of humidity sensors with more detailed analysis of things like sensor response time.

You can use the map function to normalize range sensors where both the upper and lower bounds of the sensor vary, as sketched below. And you can use Binary Saturated Aqueous Solutions as standards.
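A floating-point version of that idea, rescaling each sensor’s own observed min/max onto a shared range (the names here are illustrative):

// Floating-point analogue of Arduino's integer map() function.
float fmap(float x, float inMin, float inMax, float outMin, float outMax) {
  return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}
// e.g. normalized = fmap(reading, thisSensorMin, thisSensorMax, 0.0, 100.0);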

DIY Data Logger Housing from PVC parts (2020 update)

Basic concept: two table leg caps held together with 3″ 1/4-20 bolts & a #332 EPDM o-ring. Internal length is 2x3cm for the caps + about 5mm for each o-ring. SS bolts work fine dry, but we use nylon in salt water due to corrosion, tightening the bolts enough that the o-rings can expand to compensate for nylon’s 2-3% length expansion when hydrated. PVC is another good bolt material option if you deploy in harsh environments.

We’ve been building our underwater housings from 2″ Formufit Table Screw Caps since 2015. Those housings have proven robust on multi year deployments to 50m. While that’s a respectable record for DIY kit, we probably over-shot the mark for the kind of surface & shallow water environments that typical logger builders are aiming for.

The additional RTC & SD power control steps that we’ve added to the basic ‘logger stack’ since 2017 are now bringing typical sleep currents below 25μA. So the extra batteries our original ‘long-tube’ design can accommodate are rarely needed (described in Fig. A1 ‘Exploded view’ at the end of the Sensors paper). In fact, pressure sensors often expire before power runs out on even a single set of 2xAA lithium cells.

This raises the possibility of reducing the overall size of the housing, while addressing the problem that some were having drilling out the slip ring in that design. Any time I can reduce the amount of solvent welding is an improvement, as those chemicals are nasty:

Basic components of the smaller 2020 housing cost about $10. O-rings shown are #332 EPDM: 3/16″ width, 2-3/8″ ID x 2-3/4″ OD (or another compound).

Double sided tape attaches a 2xAA battery pack to the logger stack from 2017 (with the MIC5205 regulator removed, the unit runs on 2x lithium AA batteries).

The o-ring backer tube does not need to be solvent welded. Cut ~5cm for a 1-ring build & 5.5cm for a 2-ring, leaving ~1.5cm of head-space for wires in the top cap.

The logger stack fits snugly into the 5.5cm backer tube with room for a 2 gram desiccant pack down the side.

The screw-terminal board is only 5.5cm long, but the 2x AA battery stack is just under 6cm long.  With shorter AAA cells you can use only one o-ring.

With several 4-pin Deans micro-plug breakouts & AA batteries things get a bit tight with one o-ring. So I add a second o-ring for more interior space.

Sand away any logos or casting sprues on the plugs & clamp the pass-through fitting to the upper cap for at least 4 hours to make sure the solvent weld is really solid (I usually leave them overnight). Then wet-sand the large o-ring seat to about 800 grit. Sensor connections are threaded 1/2″ NPT, but I use a slip fit for the indicator LED, which gets potted in clear Loctite E30-CL epoxy with silica desiccant beads as filler. Most clear epoxies will yellow over time with salt water exposure, so for optical sensors or display screens I usually add an acrylic disk at the upper surface.

The only real challenge in this build is solvent welding the pass-through ports. In the 2017 build video we describe connectors with pigtails epoxied to the housing. But you don’t necessarily need that level of hardening for shallow / surface deployments. The potted sensor connections shown in the video (& our connectors tutorial) can be threaded directly to the logger body via 1/2″ threaded NPT male plugs.

Note: Position the NPT risers on the caps directly opposite the bolt struts, and as near to the edges of the cap as you can so that there is enough separation distance to spin the lock down nuts on your sensor dongles. In the photos below I had the pass-through in line with the struts, but with long bolts this may limit your finger room when tightening the sensor cable swivel nuts. These direct-to-housing connections do make the unit somewhat more vulnerable to failures at the cone washer, or cuts in the PUR insulating jacket of the sensor dongle.

Threaded bulkhead pass-throughs get drilled out with a 1/2″ bit. Alignment with bolt struts shown here is suboptimal.

This closeup shows a slight gap near the center – I could have done a better job sanding the base of the NPT to make it completely flat before gluing & clamping!

The pass-through style sensor cap mates to the lower half of the housing. We’ve always used our o-rings “dry” on these PVC housings.

I describe the creation of the sensor dongles with pex swivel connectors in the 2017 build video series.

Dongle wires need to be at least 6cm long to pass completely through the cap.

“2-Cap” housing: Aim for 5 to 15% o-ring compression but stop if there is too much bending in the PVC struts.

It’s also worth noting that there are situations where it’s a good idea to have another connector to break the line between the sensor and the logger. (shown in 2017)  We often mount rain gauges on top of buildings with 10-20m of cable – so we aren’t going to haul the whole thing in just to service the logger. But on-hull connections like the ones shown with this new housing necessarily open the body cavity to moisture when you disconnect a sensor, and nothing makes a tropical rainstorm more likely to occur during fieldwork than disconnecting the loggers that were supposed to be measuring rainfall.

With a double o-ring and additional seal(s) in the cap, we probably won’t be deploying this new design past ~10m. Given how quickly they can be made, this short body will be a standard for the next few years; perhaps by then those fancy resin printers will be cheap enough for regular DIY builders to start using them – at least for shallow water work.  For now we’ll continue with the long body style for deeper deployments or remote locations that we might not get to again for a long time. The second o-ring is not really necessary if you make a nice tight stack when you assemble the logger.

In general I’d say these ‘plumbing part’ housings reach their long-term deployment limit at about ~60m, because the flat end caps start bowing noticeably at that depth. That overlaps nicely with the limit of standard sport diving, but if your research needs more depth it’s worth looking into the aluminum body tubes/endcaps becoming available in the ROV market. As an example: Blue Robotics makes some interesting enclosures if you need clear acrylic endcaps for camera-based work.

(UPDATE: the double o-ring shown in the photo above was required when using 3.5″ bolts. That was a mistake, as those o-rings tended to extrude easily. Using shorter 3″ bolts lets you go with only a single o-ring, which gives you a solid seal with no accidental extrusions.)

Addendum 2023-05-25

We needed a way to see how far we could take the new falcon tube loggers, and water filter housings are a good solution since the domestic water pressure range of 40-80psi overlaps nicely with sport diving depths. The internal clearance of the filter housing we used is slightly larger than 4.5″ x 9.5″, so it could accommodate these older PVC style housings as well:
https://thecavepearlproject.org/2023/05/24/a-diy-pressure-chamber-to-test-housings/

A household water filter makes a good low-range pressure chamber.

How to make Resistive Sensor Readings with DIGITAL I/O pins

AN685 figure 8: The RC rise-time response of the circuit allows microcontroller timers to be used to determine the relative resistance of the NTC element.

For more than a year we’ve been oversampling the Arduino’s humble ADC to get >16-bit ambient temperature readings from a 20¢ thermistor. That pin-toggling method is simple and delivers solid results, but it requires the main CPU to stay awake long enough to capture multiple readings for the decimation step (~200 milliseconds @ 250 kHz ADC clock). While that only burns 100 mAs per day with a typical 15-minute sample interval, a read through Thermistors in Single Supply Temperature Sensing Circuits hinted that I could ditch the ADC and read those sensors with a pin-interrupt method that would let me sleep the CPU during the process. An additional benefit is that the sensors draw no power unless they are actively being read.

Given how easy it is to drop a trend-line on your data, I’ve never understood why engineers dislike non-linear sensors so much. If you leave out the linearizing resistor you end up with this simple relationship.

The resolution of time-based methods depends on the speed of the clock, and Timer1 can be set with a prescaler of 1, ticking in step with the 8 MHz oscillator on our Pro Mini based data loggers. The input capture unit can save Timer1’s counter value as soon as the voltage on pin D8 passes a high/low threshold. This underappreciated feature of the 328p is more precise than typical interrupt handling, and people often use it to measure the time between the rising edges of two inputs to determine the pulse width/frequency of incoming signals.

You are not limited to one sensor here – you can line them up like ducks on as many driver pins as you have available. As long as they share that common connection you just read each one sequentially, with the other pins in input mode. Since they are all being compared to the same reference resistor, you’ll see better cohort consistency than you would by using multiple series resistors. At 8 MHz & 3.3v you get an ICU count of about 3500 for the 10k reference resistor.

Using 328p timers is described in detail at Nick Gammon’s Timers & Counters page, and I realized that I could tweak the method from ‘Timing an interval using the input capture unit’ (Reply #12) so that it recorded only the initial rise of an RC circuit. This is essentially the same idea as his capacitance measuring method, except that I’m using D8’s external pin-change threshold at 0.66*Vcc rather than a level set through the comparator. That’s almost the same as one RC time constant (63%), but the actual level doesn’t matter so long as it’s consistent between the two consecutive reference & sensor readings. It’s also worth noting that the method doesn’t require an ICU peripheral – any pin that supports a rising/falling interrupt can be used (see the addendum for details); it’s just that the ICU makes the method more precise, which is important with small capacitor values. (Note that AVRs have Schmitt triggers on their digital GPIO pins. This is not necessarily true for other digital chips; for pins without a Schmitt trigger, this method may not give consistent results.)

Using Nick’s code as my guide, here is how I set up Timer1 with the ICU:

#include <avr/sleep.h>              //  to sleep the processor
#include <avr/power.h>            //  for peripherals shutdown
#include <LowPower.h>            //  https://github.com/rocketscream/Low-Power

float referencePullupResistance=10351.6;   // a 1% metfilm measured with a DVM
volatile boolean triggered;
volatile unsigned long overflowCount;
volatile unsigned long finishTime;

ISR (TIMER1_OVF_vect) {   // triggers when T1 overflows: every 65536 system clock ticks
  overflowCount++;
}

ISR (TIMER1_CAPT_vect) {                        // transfers Timer1 when D8 reaches the threshold
  sleep_disable();
  unsigned int timer1CounterValue = ICR1;       // Input Capture register (datasheet p117)
  unsigned long overflowCopy = overflowCount;

  if ((TIFR1 & bit(TOV1)) && timer1CounterValue < 256) {   // 256 is an arbitrary low value
    overflowCopy++;                             // if we "just missed" an overflow
  }

  if (triggered) { return; }                    // multiple trigger error catch

  finishTime = (overflowCopy << 16) + timer1CounterValue;
  triggered = true;
  TIMSK1 = 0;   // all 4 interrupts controlled by this register are disabled by writing zero
}

void prepareForInterrupts() {
  noInterrupts();
  triggered = false;             // reset for the do{ ... }while(!triggered); loop
  overflowCount = 0;             // reset overflow counter
  TCCR1A = 0; TCCR1B = 0;        // reset the two (16-bit) Timer 1 registers
  TCNT1 = 0;                     // we are not preloading the timer for match/compare
  bitSet(TCCR1B, CS10);          // set prescaler to 1x system clock (F_CPU)
  bitSet(TCCR1B, ICES1);         // Input Capture Edge Select ICES1: =1 for rising edge
  // or use bitClear(TCCR1B,ICES1); to record falling edge

  // Clear Timer/Counter Interrupt Flag Register bits by writing 1
  bitSet(TIFR1, ICF1);           // Input Capture Flag 1
  bitSet(TIFR1, TOV1);           // Timer/Counter Overflow Flag

  bitSet(TIMSK1, TOIE1);         // interrupt on Timer 1 overflow
  bitSet(TIMSK1, ICIE1);         // enable the input capture unit
  interrupts();
}

With the interrupt vectors ready, take the first reading with pins D8 & D9 in INPUT mode and D7 HIGH. This charges the capacitor through the reference resistor:

//========== read 10k reference resistor on D7 ===========

power_timer0_disable();    // otherwise Timer0's millis() interrupt fires every ~1ms

pinMode(7, INPUT);digitalWrite(7,LOW);     // our reference
pinMode(9, INPUT);digitalWrite(9,LOW);     // the thermistor
pinMode(8,OUTPUT);digitalWrite(8,LOW);  // ground & drain the cap through 300Ω
LowPower.powerDown(SLEEP_30MS, ADC_OFF, BOD_ON);  //overkill: 5T is only 0.15ms

pinMode(8,INPUT);digitalWrite(8, LOW);    // Now pin D8 is listening

set_sleep_mode (SLEEP_MODE_IDLE);  // leaves Timer1 running
prepareForInterrupts ();
noInterrupts ();
sleep_enable();
DDRD |= (1 << DDD7);          // Pin D7 to OUTPUT
PORTD |= (1 << PORTD7);    // Pin D7 HIGH -> charging the cap through 10k ref

do {
  interrupts();
  sleep_cpu();           // sleep until D8 reaches the threshold voltage
  noInterrupts();
} while (!triggered);    // trapped here till TIMER1_CAPT_vect changes the value of triggered

sleep_disable();           // redundant here but belt&suspenders right?
interrupts ();
unsigned long elapsedTimeReff=finishTime;   // this is the reference reading

Now discharge and then repeat the process a second time with D7 & D8 in INPUT mode, and D9 HIGH to charge the capacitor through the thermistor:

//==========read the NTC thermistor on D9 ===========

pinMode(7, INPUT);digitalWrite(7,LOW);     // our reference
pinMode(9, INPUT);digitalWrite(9,LOW);     // the thermistor
pinMode(8,OUTPUT);digitalWrite(8,LOW);  // ground & drain the cap through 300Ω
LowPower.powerDown(SLEEP_30MS, ADC_OFF, BOD_ON);

pinMode(8,INPUT);digitalWrite(8, LOW);    // Now pin D8 is listening

set_sleep_mode (SLEEP_MODE_IDLE);
prepareForInterrupts ();
noInterrupts ();
sleep_enable();
DDRB |= (1 << DDB1);          // Pin D9 to OUTPUT
PORTB |= (1 << PORTB1);    // set D9 HIGH -> charging through 10k NTC thermistor

do {
  interrupts();
  sleep_cpu();
  noInterrupts();
} while (!triggered);

sleep_disable();
interrupts ();
unsigned long elapsedTimeSensor=finishTime;   //this is your sensor reading

Now you can determine the resistance of the NTC thermistor via the ratio:

unsigned long resistanceof10kNTC =
  (elapsedTimeSensor * (unsigned long)referencePullupResistance) / elapsedTimeReff;

pinMode(9, INPUT); digitalWrite(9, LOW); pinMode(7, INPUT); digitalWrite(7, LOW);
pinMode(8, OUTPUT); digitalWrite(8, LOW);   // discharge the capacitor when you are done

The integrating capacitor does a fantastic job of smoothing the readings, and getting the resistance directly eliminates half of the calculations you’d normally do with a thermistor. To figure out your constants, you need to know the resistance at three different temperatures. These should be evenly spaced and at least 10 degrees apart, with the idea that your calibration covers the range you expect to use the sensor for. I usually put loggers in the refrigerator & freezer to get points with enough separation from normal room temp, with the thermistor taped to the surface of a si7051. Then plug those values into the thermistor calculator provided by Stanford Research Systems.
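Once the SRS calculator hands back the A, B & C constants, converting the measured resistance is just the Steinhart-Hart equation. A sketch, with typical 10k NTC coefficients standing in for real calibration values:

#include <math.h>
// Steinhart-Hart: 1/T = A + B*ln(R) + C*(ln(R))^3, with T in kelvin.
float ntcTemperatureC(float resistance) {
  const float A = 1.129e-3, B = 2.341e-4, C = 8.775e-8;  // placeholders - use your own
  float lnR = log(resistance);
  return 1.0 / (A + B * lnR + C * lnR * lnR * lnR) - 273.15;
}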

Just for comparison I ran a few head-to-head trials against my older dithering/oversampling method:

°Celsius vs time [1-minute interval]: si7051 reference [0.01°C 14-bit resolution, 10.8 msec/read] vs. pin-toggle oversampled 5k NTC vs. ICU-timing 10k NTC. I’ve artificially separated these curves for visual comparison, and the 5k was not in direct contact with the si7051 ±0.1°C accuracy reference, while the 10k NTC was taped to the surface of the chip – so some of the 5k’s offset is an artifact. The Timer1 ratios deliver better resolution than 16-bit (equivalent) oversampling in 1/10 the time.

There are a couple of things to keep in mind with this method:

si7051 reference temp (°C) vs 10k NTC temp with a ceramic 106 capacitor. If your ulong calculations overflow at low temperatures like the graph above, switch to doing the division before the multiplication, or use larger ‘long-long’ variables. Also keep in mind that the Arduino will default to 16-bit calculations unless you set/cast everything to longer ints. Or you could make your life easy: save the raw elapsedTimeReff & elapsedTimeSensor values and do the calculations later in Excel. Whenever you see a sudden discontinuity, where the result of a calculation suddenly takes a big jump to larger or smaller values, you should suspect a variable type/cast error.

1) Because my Timer1 numbers were small with a 104 cap, I did the multiplication before the division. But keep in mind that this method can easily generate values that over-run your variables during calculation. Ulong MAX is 4,294,967,295, so the elapsedTimeSensor reading must stay below 429,496 or the multiplication overflows with a 10,000 ohm reference. Dividing that by our 8 MHz clock gives about 50 milliseconds. The pin interrupt threshold is reached after about one rise-time constant, so you can use an RC rise time calculator to figure out your capacitor’s upper size limit. But keep in mind that’s one RC at the maximum resistance you expect from your NTC – the resistance at the coldest temperature you expect to measure, as opposed to its nominal rating. (It’s kind of a chicken & egg problem with an unknown thermistor, right? See if you can find a manufacturer’s table of values on the web, or simply try a 0.1uF and see if it works.) Once you have some constants in hand, Ametherm’s Steinhart & Hart page lets you check the actual temperature at which your particular thermistor will reach a given resistance. Variable over-runs are easy to spot because the problems appear & then disappear whenever some temperature threshold is crossed. I tried to get around this on a few large-capacitor test runs by casting everything to float variables, but that led to other calculation errors.

(Note: Integer arithmetic on the Arduino defaults to 16 bit & never promotes to higher-bit calculations unless you cast one of the numbers to a high-bit integer first. After casting, the Arduino supports 64-bit “long long” int64_t & uint64_t integers for large number calculations, but they do gobble up lots of program memory space – typically adding 1 to 3k to the compiled size. Also, Arduino’s printing function cannot handle 64-bit numbers, so you have to slice them into smaller pieces before using any .print functions.)
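A sketch of both workarounds: casting one operand before the multiplication forces the math into 64 bits, and a recursive helper prints the result one digit at a time (the 10000ULL below stands in for your measured reference resistance):

void print64(uint64_t v) {                // recursive digit-by-digit print
  if (v >= 10) { print64(v / 10); }
  Serial.print((char)('0' + (v % 10)));
}

void reportResistance(uint32_t sensorTicks, uint32_t referenceTicks) {
  uint64_t r = ((uint64_t)sensorTicks * 10000ULL) / referenceTicks;  // cast BEFORE multiplying
  print64(r);
  Serial.println();
}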

2) This method works with any kind of resistive sensor, but if you have one that goes below ~200 ohms (like a photoresistor in full sunlight) then the capacitor charging could draw more power than you can safely supply from the D9 pin.  In those cases add a ~300Ω resistor in series with your sensor to limit the current, and subtract that value from the output of the final calculation. At higher currents you’ll also have voltage drop across the mosfets controlling the I/O pins (~40Ω on a 3.3v Pro Mini), so make sure the calibration includes the ends of your range.

There are a host of things that might affect the readings, because every component has temperature, aging, and other coefficients, but for the accuracy level I’m after many of those factors get absorbed into the S&H coefficients. Even if you pay top dollar for reference resistors, it doesn’t necessarily mean they are “low TC” – that’s why expensive resistors have a temperature compensation curve in the datasheet. What quality references usually promise is low long-term drift @ a certain fixed temperature (normally around 20~25°C), so ‘real world’ temps up at 40°C are going to cause accelerated drift.

The ratiometric nature of the method means it’s almost independent of the value of the capacitor, so you can get away with a cheap ceramic cap even though its value changes dramatically with temperature (& also with DC bias). In my tests the thermal variation of a Y5V causes a delta in the reference resistor count that’s about 1/3 the size of the delta in the NTC thermistor. Last year I also found that the main system clock varies by about 0.05% over a 40°C range, but that shouldn’t be a problem if the reference and the sensor readings are taken immediately after one another. Ditto for variations in your supply. None of it matters unless it affects the system ‘between’ the two counts you are using to make the ratio. So the only thing to watch out for is voltage droop on your battery between readings, and we sleep the system to let the cell voltages fully restore before each channel reading.

The significant digits from your final calculation depend on the RC rise time, so switching to 100k thermistors increases the resolution, as would processors with higher clock speeds. You can shut down more peripherals with the PRR to use less power during the readings, as long as you leave Timer1 running with SLEEP_MODE_IDLE. I’ve also found that cycling the capacitor through an initial charge/discharge cycle (through the 300Ω on D8) improved the consistency of the readings. That capacitor shakedown might be an artifact of the ceramics I was using, but you should take every step you can to eliminate errors that might arise from pre-existing conditions. I’ve also noticed that the read order matters, though the effect is small.
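That shakedown is just an extra charge/discharge pass before the measurement sequence; a sketch of the idea, assuming the same wiring as the listings above:

// Hypothetical shakedown pass before the real reference & sensor readings:
pinMode(8, OUTPUT); digitalWrite(8, HIGH);            // charge the cap through the 300Ω on D8
LowPower.powerDown(SLEEP_15MS, ADC_OFF, BOD_ON);
digitalWrite(8, LOW);                                 // then drain it again
LowPower.powerDown(SLEEP_30MS, ADC_OFF, BOD_ON);
pinMode(8, INPUT); digitalWrite(8, LOW);              // back to listening before the readings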

Code consistency is always important with time-based calibrations, no matter how stable your reference is. Using a smaller integrating capacitor makes it more likely that your calibration constants will be affected by variations in code execution time, and any software scheme is going to show some timing jitter because both interrupts and the loop are subject to being delayed by other interrupts. Noise on the rail from a bad regulator will directly affect your readings, so a larger 1uF (105) capacitor is a safer option than a 104. This method bakes a heap of small systemic errors into the NTC calibration constants, and the approach works because most of those errors are thermal variations too. However, code-dependent variations mess with the fit of the thermistor equation because they tend to be independent of temperature, so make sure the deployment uses EXACTLY the same code that you calibrated with. We are passing current through the thermistor to charge the capacitor, so there will inevitably be some self-heating – if your calibration constants were derived with one pass, then your deployment must also read with only one pass. If you calibrate with a [104] ceramic & then switch to a [105], the temps recorded with the larger capacitor will be offset higher than actual. Oversampling works fine to reduce noise or boost resolution, but since it leverages multiple passes it also causes more self-heating.

Will this method replace our pin-toggled oversampling?  Perhaps not for something as simple as a thermistor since that method has already proven itself in the real world, and I don’t really have anything better to do with A6 & A7. And oversampling still has the advantage of being simultaneously available on ALL the analog inputs, while the ICU is a limited resource.  Given the high resolution that’s potentially available with the Timer1/ICU combination, I might save this method for sensors with less dynamic range.  I already have some ideas there and, of course, lots more testing to do before I figure out if there are other problems related to this new method.  I still haven’t determined what the long-term drift is for the Pro Mini’s oscillator, and the jitter seen in the WDT experiment taught me to be cautious about counting those chickens.


Addendum:   Using the processor's built-in pull-up resistors

After a few successful runs, I realized that I could use the internal pull-up resistor on D8 as my reference, bringing the method down to only three components. Measuring the resistance of the internal pull-up is simply a matter of enabling it and then measuring the current that flows between that pin and ground. I ran several tests, and the Pro Mini's internal pull-ups were all close to 36,000 ohms, so my reference becomes that 36k plus the 300Ω resistor needed on D8 to safely discharge the capacitor between cycles. I just have to set the port manipulation differently before the central do-while loop:

PORTB |= (1 << PORTB0);     //  enable pullup on D8 to start charging the capacitor
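After the ICU captures the rise, the teardown is the mirror image. A sketch, assuming the same D8 / 300Ω / capacitor wiring described above (the exact sequence here is illustrative, not the deployed code):

PORTB &= ~(1 << PORTB0);    // pull-up off again after the timing loop ends
DDRB |= (1 << DDB0);        // D8 as OUTPUT-LOW drains the cap through the 300Ω
LowPower.powerDown(SLEEP_30MS, ADC_OFF, BOD_ON);  // let the cap discharge fully
DDRB &= ~(1 << DDB0);       // back to INPUT, ready for the next charge cycle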

A comparison of the two approaches:

°Celsius vs time (post-calibration, 0.1µF X7R ceramic cap): si7051 reference [±0.01°C] vs. thermistor ICU ratio with a 10k reference resistor charging the capacitor vs. the same thermistor reading calibrated using the rise time through pin D8's internal pull-up resistor. The reported 'resistance' value of the NTC thermistor differed by more than 1k between the two methods, with the 10k metal-film reference providing values closer to the rated spec. However, the Steinhart-Hart constants from the calibration were also quite different, so the net result was indistinguishable between the two references in the room-temperature range.

In reality, the pull-up "resistor" likely isn't a real resistor at all, but an active device made from transistor(s) that looks like a resistor when operated in its active region. I found the baseline temperature variance to be about 200 ohms over my 40°C calibration range. And because you are charging the capacitor through the reference and through the thermistor, the heat that generates necessarily changes those values during the process. However, when you run a calibration those factors get absorbed into the S&H coefficients, provided you let the system equilibrate during the run.

As might be expected, chip operation time affects the resistance of the internal pull-up, so the execution pattern of the code used for your calibration must exactly match your deployment code, or the calibration constants will give you an offset error proportional to the variance of the internal pull-up caused by the processor's run time. Discharging the capacitor through D8 also generates some internal heating, so those (~30-60ms) sleep intervals also have to be consistent. In data logging applications you can read that reference immediately after a long cool-down period of processor sleep, and use the PRR to reduce self-heating while the sample is taken.

Another issue was lag, because that pull-up is embedded with the rest of the chip in a lump of epoxy. This was pretty small, with a maximum effect of less than ±0.02°C/minute, and I didn't see it until temperatures fell below -5°C. Still, for situations where temperature is changing quickly I'd stick with the external reference, and physically attach it to the thermistor so they stay in sync.


Addendum:   What if your processor doesn’t have an Input Capture Unit?

With a 10k / 0.1µF combination, I was seeing Timer1 counts of about 5600, which is pretty close to one 63.2% R×C time constant for the pair. That combination limits you to 4 significant figures and takes about 2 × 0.7 msec per paired reading on average. Bumping the integrating capacitor up to 1µF (ceramic 105) multiplies your time by a factor of 10, for another significant figure and an average of ~15 msec per paired set of readings. Alternatively, a 1µF or greater capacitor allows you to record the rise times with micros() (which counts 8x slower than Timer1) and still get acceptable results, even with the extra interrupts that leaving Timer0 running causes. So the rise-time method could be implemented on processors that lack an input capture unit, provided they have Schmitt triggers on the digital inputs like the AVR, which registers a CMOS high transition at ~0.6 × Vcc.

#include <avr/sleep.h>
#include <LowPower.h>      // RocketScream's LowPower library

volatile boolean triggered;    // set inside the ISR, so it must be volatile

void d3isr() {
  triggered = true;
}

pinMode(7, INPUT); digitalWrite(7, LOW);    // reference resistor
pinMode(9, INPUT); digitalWrite(9, LOW);    // the thermistor
pinMode(3, OUTPUT); digitalWrite(3, LOW);   // ground & drain the cap through 300Ω
LowPower.powerDown(SLEEP_30MS, ADC_OFF, BOD_ON);  // 5T is only 1.5ms w 10k

pinMode(3, INPUT); digitalWrite(3, LOW);    // Now pin D3 is listening

triggered = false;
set_sleep_mode(SLEEP_MODE_IDLE);   // leave Timer0 running for micros()
unsigned long startTime = micros();
noInterrupts();
attachInterrupt(1, d3isr, RISING); // using pin D3 here instead of D8
DDRD |= (1 << DDD7);               // Pin D7 to OUTPUT
PORTD |= (1 << PORTD7);            // Pin D7 HIGH -> charging the cap through 10k ref
sleep_enable();

do {
  interrupts();
  sleep_cpu();
  noInterrupts();
} while (!triggered);

sleep_disable();
detachInterrupt(1);
interrupts();
unsigned long D7riseTime = micros() - startTime;

Then repeat the pattern shown earlier for the thermistor reading & calculation (a sketch of that second pass follows below). I'd probably bump it up to a ceramic 106 for the micros() method just for some extra wiggle room. The method doesn't really care what value of capacitor you use, but you have to leave more time for the discharge step as the size of your capacitor increases. Note that I'm using relatively slow digital writes (~5 µs each) outside the timing loop, and direct port manipulation (~200 ns each) inside the timed sequences to reduce that source of error.
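For completeness, a sketch of that second pass, assuming the thermistor charges the cap from D9 (PB1) as in the wiring list above:

PORTD &= ~(1 << PORTD7); DDRD &= ~(1 << DDD7);   // reference pin back to INPUT-LOW
pinMode(3, OUTPUT); digitalWrite(3, LOW);         // drain the cap through the 300Ω again
LowPower.powerDown(SLEEP_30MS, ADC_OFF, BOD_ON);
pinMode(3, INPUT); digitalWrite(3, LOW);          // D3 listening again

triggered = false;
startTime = micros();                             // reuse the variable from the first pass
noInterrupts();
attachInterrupt(1, d3isr, RISING);
DDRB |= (1 << DDB1);                              // pin D9 to OUTPUT
PORTB |= (1 << PORTB1);                           // D9 HIGH -> charging the cap through the NTC
sleep_enable();
do { interrupts(); sleep_cpu(); noInterrupts(); } while (!triggered);
sleep_disable(); detachInterrupt(1); interrupts();
unsigned long D9riseTime = micros() - startTime;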

Addendum 20191020:

After running more tests of this technique, I'm starting to appreciate that even on regulated systems you always have about 10-30 counts of jitter in the Timer1 counts, even with the 4x input capture noise filter enabled. I suspect this is because the Schmitt triggers on the digital pins are also subject to noise/temp/etc., and because other system interrupts find a way to sneak in. A smaller 104 integrating capacitor makes your readings 10x faster, but the fixed jitter error is then a correspondingly larger percentage of the total reading (100nF typically sees raw counts in the 3000 range for the 10k reference). By the time you've oversampled the 104 capacitor readings up to a bit-depth equivalent of single-pass readings with the 105 ceramic (raw counts in the 60,000 range for the same 10k ref), you've spent about the same amount of run time getting to that result. (Keep in mind that even with a pair of single-pass readings using the 10k/104 combination, raw counts of ~3500 yield a jitter-limited thermistor resolution of about 0.01°C.)

So, as a general rule of thumb, if your raw Timer1 counts are in the 20,000-60,000 range, you get beautiful smooth results no matter what you did to get there. This translates into about 2.5 – 7.5 milliseconds per read, and this seems to be a kind of ‘sweet-spot’ for ICU based timing methods because the system’s timing jitter error is insignificant at that point. With 5 significant figures in the raw data, the graphs are so smooth they make data from the si7051 reference I’m using look like a scratchy mess.

Another thing to watch out for is boards using temperature-compensated oscillators for their system clock. ICU methods actually work better with the crappy ceramic oscillators on clone boards, because their poor thermal behavior just gets rolled into the thermistor's S&H constants during calibration. Better quality boards like the 8MHz Ultra from Rocket Scream have compensation circuits that kick in around 20°C, which puts a weird discontinuity into the system behavior that cannot be gracefully absorbed by the thermistor constants. The net result is that calibration comes out worse on boards with temperature compensation on their oscillators.

The thermistor constants also neatly absorb the tempco and even offset errors in the reference resistor. So if you are calibrating a thermistor for a given logger, and it will never be removed from that machine, you can set your reference resistor in the code to some arbitrary perfect value like 10,000 ohms, and just let the calibration push any offset between the real reference and your arbitrary value into the S&H constants. This lets you standardize the code across multiple builds if you are not worried about 'interchangeability'.
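In code that boils down to something like this (a sketch, with A, B & C standing in for whatever Steinhart-Hart constants your own calibration produced):

#include <math.h>
const float R_REF = 10000.0;           // the 'arbitrary perfect' reference value
float rNTC  = R_REF * ((float)ntcCount / (float)refCount);    // counts from the two ICU passes
float lnR   = log(rNTC);                                      // natural log for Steinhart-Hart
float tempC = (1.0 / (A + B*lnR + C*lnR*lnR*lnR)) - 273.15;   // Kelvin -> Celsius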

And finally, this method is working well on unregulated systems with significant battery supply variations as I test the loggers down to -15°C in my freezer. In addition to battery droop, those cheap ceramic caps have wicked tempcos, so the raw readings from the reference resistor vary dramatically during these tests, but the ratio of the NTC to this reference remains stable over a 30-40°C range, with errors in the ±0.1°C range relative to the reference. (Note: the si7051 itself is rated ±0.13°C, so the net is probably around ±0.25°C.)

Addendum 20201011:

Re: the temperature coefficient of resistance (TCR) of your reference resistor:

“Using a thin film resistor at ±10 ppm/°C would result in a 100 ppm (0.01%) error if the ambient changes by only 10°C. If the temperature of operation is not close to the midpoint of the temperature range used to quantify the TCR at ±10 ppm/°C, it would result in a much larger error over higher temperature ranges. A foil resistor would only change 0.0002% to 0.002% over that same 10°C span, depending upon which model is used (0.2 ppm/°C to 2 ppm/°C.) And for larger temperature spans, it would be even more important to use a resistor with an inherently low TCR.”

During calibration, I usually bake the reference resistor's tempco into the thermistor constants by simply assuming the resistor is a 'perfect' 10k ohm during all calculations (this also makes the code more portable between units). However, this does nothing to correct long-term drift in your reference. If you want to tackle that problem with something like a $10 Vishay Z-foil resistor (with life stability of ±0.005%), then it's probably also worth adding plastic film capacitors, which have much better thermal coefficients: polyphenylene sulfide (PPS ±1.5%) or polypropylene (CBB or PP ±2.5%). A quick browse around the Bay shows those are often available for less than $1 each, and the aging rate (% change/decade hour) for both of those dielectrics is listed as negligible. The trade-off is that they are huge in comparison to ceramics, so you are not going to just sneak one in between the pins on your Pro Mini. Be sure to check the rated voltage, and don't order them if they are rated >100v, as the common 650v film caps are too big to be practical on small logger builds. For the coup de grâce, you could correct away the system clock variation by comparing it to the RTC.

Addendum 2020-03-31:  Small capacitors make this method sensitive to a noisy rail

After a reasonable number of builds, I have finally identified one primary cause of timing jitter with this technique: a noisy regulator. To get sleep current down I replace the stock MIC5205s on clone ProMini boards with MCP1700s, and I noticed a few from a recent batch of loggers were producing noisy curves on my calibration runs. One of them was extreme, creating >0.5°C of variation in the record:

ICU based Thermistor readings VS si7051 reference. Sensors in physical contact with each other. Y axis = Celsius

But in the same batch I had other thermistors with less noise than the si7051s I was using as a reference. All were using small 104 ceramic capacitors for the integration, producing relatively low counts (~3500 clock ticks) on the 10k reference resistor.

For the unit shown above I removed the regulator, and re-ran the calibration using 2x Lithium AA batteries in series to supply the rail voltage. No other hardware was changed:

Same unit, after regulator was removed. Samples taken at 1 minute interval on both runs.

In hindsight, I should have guessed a bad regulator would be the problem, as few other issues can cause that much variation in the brief interval between the reference resistor & thermistor readings. Regulator noise/ripple translates instantly into a variation in the Schmitt trigger point on the input pin, which affects the ICU's timer count. It's possible this issue could be eliminated with more smoothing, so I will try a few caps across the rails on the less problematic units. (~1000µF/108 tantalums can be found for 50¢ on eBay, but I will start with a 10µF/106 and work up from there.)

Addendum 2020-04-05:  106 ceramic across rails increased ICU reading noise (w bad reg)

Adding a cheap 106 (Y5V ceramic) across the rails more than doubled the noise in the NTC readings with this ICU technique. This is interesting, as it goes completely against what I was expecting. Possibly that 10µF cap was just badly sized for the job, or had some other interaction via inductance effects that actually accentuated the noise? I probably need a smaller, faster cap for the job.

Changing the sampling capacitor from a 104 (0.1µF) to a 105 (1µF) dramatically reduced the problem. Not surprising, as the rail noise from the regulator is relatively consistent, while the reference timer counts change from ~3500 with a 104 capacitor to ~60,000 with the larger 105. So the jitter is still there, but it is proportionally much smaller. I'm currently re-running the thermistor calibration with that larger capacitor. If the gods are kind, the S&H thermistor constants will be the same no matter what sampling capacitor is used.

It's worth noting that this issue only appeared with the most recent crop of crappy eBay regulators. But if you are sourcing parts from dodgy vendors, you'd best go with a 105 sampling capacitor right from the start to smooth out that kind of noise.

Addendum 2020-04-06:  Re-use old S&H constants after changing the sample capacitor?

After discovering the regulator issue, I re-ran a few thermistor calibrations once the sampling capacitor had been changed from a 104 to a 105. This revealed that the thermistor constants obtained with a 104 sampling capacitor still work, but it depends on the tolerance you are aiming for: the older 104-cap constants drift by about ±0.3°C over a 40°C range. The extra resolution provided by the larger 105 cap is only useful if you have the accuracy to utilize it (i.e. it doesn't matter if the third decimal place is distinguishable if the whole number is wrong). I generally aim for a maximum of ±0.1°C over that range, so for our research loggers that's a complete do-over on the calibration. From now on I will only use 105 caps (or larger) with this ICU technique on regulated systems. The battery-only units were smooth as silk with smaller 104 caps because the rail had zero noise.

Addendum 2020-05-21: Using the ADS1115 in Continuous Mode for Burst Sampling

For single resistive sensors, it's hard to beat this ICU method for elegance & efficiency. However, there's still one sensor situation that forces me to use an external ADC module: differential readings on bridge sensors. In those cases I use an ADS1115 module, which can also generate interrupt alerts for other tricks like 'event'-triggered sampling.

Addendum 2021-01-24:  Don't sleep during the do-while loop with regular I/O pins.

I've been noodling around with other discharge timing methods and came across something that's relevant to using these methods on other digital pins. Here's the schematic from the actual 328 datasheet, with a bit of highlighting added. The green path is PINx: it's always available to the databus through the synchronizer (except in SLEEP mode?). The purple path is PORTx: whether or not it is connected to PINx depends on the state of DDRx (the yellow path).

As shown in the figure of General Digital I/O, the digital input signal can be clamped to ground at the input of the Schmitt Trigger. The signal denoted SLEEP in the figure, is set by the MCU Sleep Controller in Power-down mode and Standby mode to avoid high power consumption if some input signals are left floating, or have an analog signal level close to VCC/2.

When sleeping, any GPIO that is not used as an interrupt input has its input buffer disconnected from the pin and clamped LOW by the MOSFET.

Clearly the ICU on D8 must make it one of those 'interrupt exceptions', or the cap-charging method would have been grounded out when entering the sleep state. If you use a similar timing method on normal digital I/O pins, you can't sleep the processor in the central do-while loop.

The datasheet also warns that on pins where this is overridden, such as external interrupts, you must ensure these inputs don't float (page 53). The implication is that putting anything near 1/2 of the rail voltage on the input's Schmitt trigger will cause current to flow into or out of the pin. Yet another source of error that gets rolled into those catch-all S&H constants.

There is a good set of PORT/Pin maps for all 328p family processors at GreyGnome’s EnableInterrupt Github page.

Addendum 2022-03-09: Adding a second resistive sensor

Range switching with two or more NTCs could also be done if the minimum / maximum resistance values cannot meet your accuracy target with one thermistor.

We usually integrate this ICU method with our low-power 2-part loggers, as they have only a single CR2032 as their power supply. Once the pins are pulled low, these resistive sensors add nothing to the sleep current.

With the stability of battery power, a [104] is sufficient for stable ICU counts of about 3000 clock ticks for 10k ohms with an 8MHz ProMini running at 3.3 volts. Adding a second [104] ceramic capacitor in parallel with the first doubles that 10k reference count to about 6000. A 5528 LDR drops to about 2200 counts (with 2x 104s) in ambient room lighting, but it rises to very large resistance at night, so we cap the reading at one timer overflow (65534), as sketched below.
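A sketch of that overflow cap, assuming Timer1 is zeroed before each reading (the details of your own timing loop may differ):

TCNT1 = 0;                    // start the timer from zero for this reading
TIFR1 |= _BV(TOV1);           // clear any pending overflow flag (write 1 to clear)
// ... start charging through the LDR & wait for the input capture ...
unsigned int ldrCount;
if (TIFR1 & _BV(TOV1)) {      // the timer rolled over: too dark to measure
  ldrCount = 65534;           // clip night-time readings at one overflow
} else {
  ldrCount = ICR1;            // otherwise use the captured rise-time count
}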

For more details see: Powering a ProMini logger for one year on a coin cell.

Addendum 2024-04-20: How to calibrate NTC Thermistors

Calibrating the LDR to lux with a BH1750 works if you have good light conditions, but that doesn't always happen in the winter. We have released a new post that describes how to calibrate the NTC thermistor with inexpensive DIY water baths. I also recently found another method of reading resistance using a comparator (tip 13.3) in Microchip's Compiled Tips 'N Tricks Guide DS01146B.

‘No-Parts’ Temperature Measurement with Arduino Pro Mini

328p processor System Clocks & their Distribution pg26

Most micro-controllers use a quartz crystal oscillator to drive the system clock, and their resonant frequency is reasonably stable with temperature variations. In high accuracy applications like real time clocks, even that temperature variation can be compensated, and last year I devised a way to measure temperature by comparing a 1-second pulse from a DS3231 to the uncompensated 8MHz oscillator on a Pro Mini. This good clock / bad clock method worked to about 0.01°C, but the coding was complicated, and it relied on the 'quality' of the cheap RTC modules I was getting from fleaBay – which is never a good idea.

But what if you could read temperature better than 0.01°C using the Pro Mini by itself?

Figure 27-34:  Watchdog Oscillator Frequency vs. Temperature. Pg 346 (Odd that the frequency is listed as 128kHz on pg55?)  Variation is ~100 Hz/°C

The 328P watchdog timer is driven by a separate internal oscillator circuit running at about 110 kHz. This RC oscillator is notoriously bad at keeping time, because that on-chip circuit is affected by external factors like temperature. But in this particular case, that's exactly what I'm looking for. The temperature coefficient of crystal resonators is usually quoted at 10⁻⁶/°C, while for RC oscillator circuits the coefficient is usually somewhere between 10⁻³/°C and 10⁻⁴/°C. There are plenty of standard sensors that don't give you a delta that large to play with!

To compare the crystal-driven system clock to the Watchdogs unstable RC oscillator I needed a way to prevent the WDT from re-starting the system.  Fortunately you can pat the dog and/or disable it completely inside its interrupt vector:

#include <avr/wdt.h>      // for wdt_disable()

volatile boolean WDTalarm = false;
ISR(WDT_vect)
{
  wdt_disable();      // disable the watchdog so the system does not restart
  WDTalarm = true;    // flag the event
}


SLEEP_MODE_IDLE leaves the timers running, and they link back to the system clock, so you can use micros() to track how long the WDT interval actually takes. Arduino's micros() resolution cannot be better than 4 microseconds (not 1 µs as you'd expect) because of the way the timer is configured, but that boosts our detectable delta/° by a factor of four, and the crystal is far more thermally stable than the watchdog. It's worth noting that Timer0 (upon which micros() depends) generates interrupts all the time during the WDT interval; in fact, at the playground they suggest that you have to disable Timer0 during IDLE mode sleeps. But for each time interval, the extra loops caused by those non-WDT interrupts create a consistent positive offset, and this does not affect the temperature-related delta.

#include <avr/sleep.h>

WDTalarm = false;
// Set the Watchdog timer                    from: https://www.gammon.com.au/power
byte interval = 0b000110;  // 1s=0b000110, 2s=0b000111, 4s=0b100000, 8s=0b100001
// 64ms=0b000010, 128ms=0b000011, 256ms=0b000100, 512ms=0b000101
noInterrupts();
MCUSR = 0;
WDTCSR |= 0b00011000;              // set WDCE, WDE
WDTCSR = 0b01000000 | interval;    // set WDIE & delay interval
wdt_reset();                       // pat the dog
interrupts();
unsigned long startTime = micros();
while (!WDTalarm) {    // sleep while waiting for the WDT
  set_sleep_mode(SLEEP_MODE_IDLE);
  noInterrupts(); sleep_enable(); interrupts(); sleep_cpu();
  sleep_disable();   // processor starts here when any interrupt occurs
}
unsigned long WDTmicrosTime = micros() - startTime;  // this is your measurement!

The while-loop check is required to deal with the system interrupts that result from leaving the micros timer running, otherwise you never make it all the way through the WDT interval. I haven’t yet figured out how many interrupts you’d have to disable to get the method working without that loop.

To calibrate, I use my standard refrigerator->freezer->room sequence for a repeatable range >30°C. Since the mcu has some serious thermal lag, the key is doing everything VERY SLOWLY, with the logger inside a homemade "calibration box" made from two heavy ceramic pots with a bag of rice between them to add thermal mass:

1sec WDT micros() (left axis) vs si7051 °C Temp (right axis) : Calibration data selected from areas with the smallest change/time so that the reference and the 328p have equilibrated.

If you use reference data from those quiescent periods, the fit is remarkably good:

si7051 reference temperature vs 1 sec WDT micros(): A fit this good makes me wonder if the capacitor on the xtal oscillator is affected the same way as the capacitor in the watchdog's RC oscillator, with the net result being improved linearity. In this example, there was a constant over-count of 100,000 microseconds per 1-second WDT interval.

I’m still experimenting with this method, but my cheap clone boards are delivering a micros() delta > 400 counts /°C with a one second interval – for a nominal resolution of ~0.0025°C.  Of course that’s just the raw delta.  When you take that beautiful calibration equation and apply it to the raw readings you discover an inter-reading jitter of about 0.1°C – and that lack of precision becomes the ‘effective’ limit of the resolution.  It’s going to take some serious smoothing to get that under control, and I’ll be attacking the problem with my favorite median filters over the next few days. I will also see if I can reduce it at the source by shutting down more peripherals and keeping an eye on stray pin currents.

Noise from si7051 reference (red)  vs Cal. equation applied to raw WDT micros readings (blue). 

Doubling the interval cuts the noise and the apparent resolution in half, and if you are willing to wait around for the watchdog's 8-second maximum, you can add an order of magnitude. Of course you could also go in the other direction: a quarter-second WDT interval would deliver ~100 counts/°C, which still gets you a nominal 0.01°C, though the jitter gets worse. Note that you can't reuse the 'b' coefficients from one interval to the next, because of the overhead caused by the non-WDT interrupts. That "awake time" must also be contributing some internal chip heating.

The si7051 reference sensor needs to be held in direct physical contact with the surface of the mcu during the room->fridge->freezer calibration, which takes 24 hours. Since my ref is only ±0.1°C accuracy, calibrations based on it are probably only good to about ±0.2°C.

There are a few limitations to keep in mind, the biggest being that messing with WDT_vect means you can't use the watchdog timer for its intended purpose any more (it interferes with RocketScream's LowPower library). The other big limitation is that you can only do this trick on a voltage-regulated system, because RC oscillators are affected by the applied voltage. Though in this case both oscillators are exposed to whatever is on the rail, so a bit more calibration effort might let you get away with a battery-driven system. (And now that I think about it: if you did the temp calibration while the system was regulated, you might then also be able to derive the system voltage from the two oscillators while running unregulated.)

Self-heating during normal operation means that this method will not be accurate unless you take your temperature readings after waking the processor from about 5-10 minutes of power-down sleep. The mass of the circuit board means that the mcu will always have significant thermal lag. So there is no way to make this method work quickly and any non-periodic system interrupts will throw off your micros() reading.

Every board has a different crystal/capacitor/oscillator combination, so you have to re-calibrate for each one. Although the slopes are similar, I've also found that the raw readings vary by more than ±10k between different Pro Minis for the same 1 sec WDT interval at the same temperature. The silver lining is that the boards I'm using probably have the cheapest parts available, so better quality components could boost the accuracy, though I should insert the usual blurb here that resolution and accuracy are not the same thing at all. I haven't had enough time yet to assess things like drift, or hysteresis beyond the thermal lag issue, but those are usually less of a problem with quality kit. If your board is using Y5V caps, it probably won't go much below -15°C before capacitor failure disrupts the method.

It's also worth noting that many sleep libraries, like RocketScream's LowPower lib, do their own modifications to the watchdog timer, so this method won't work with them unless you add the flag variable to their modified version of WDT_vect. To add this technique to the base code for our 1-hour classroom logger, I'll have to get rid of that library dependency.

Where to go from here:

  1. Turning off peripherals with PRR can save power and reduce heating during the interval.
  2. Switching from micros() to timer-based overflows could increase the time resolution to less than 100 ns, raising nominal thermal resolution (see the sketch after this list).
  3. Clocking the system from the DS3231's temperature compensated 32kHz output could give another 100 counts/°C and improve the thermal accuracy. My gut feeling is the noise would also be reduced, but that depends on where it's originating.
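For idea #2, a sketch along the lines of Nick Gammon's Timer1 counting approach (the variable names here are hypothetical, and the WDT/sleep scaffolding is the same as in the code above):

volatile unsigned long t1Overflows;            // counts the 16-bit rollovers
ISR(TIMER1_OVF_vect) { t1Overflows++; }

TCCR1A = 0;
TCCR1B = _BV(CS10);          // Timer1 clocked from the system clock, no prescaler
TCNT1 = 0; t1Overflows = 0;
TIMSK1 = _BV(TOIE1);         // enable the Timer1 overflow interrupt
// ... start the WDT & sleep in IDLE exactly as before ...
// then, after the WDT interrupt fires:
unsigned long elapsedTicks = (t1Overflows << 16) + TCNT1;  // ~125 ns per tick @ 8MHz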

Despite the limitations, this might be the best “no-extra-parts” method for measuring temperature that’s possible with a Pro Mini, and the method generalizes to every other micro-controller board on the market provided they have an independent internal oscillator for the watchdog timer.

Addendum:

As I run more units through the full calibration, I’m seeing about 1 in 3 where a polynomial fits the data better for the -15 to +25°C range:

si7051 reference temperature vs 1 sec WDT micros() : a different unit, but both clone boards from the same supplier

This is actually what I was expecting in the first place, and I suspect all the fits would be 2nd order over a wider range of calibration temperatures. Also, this is the raw micros() output, so you could make those coefficients more manageable by subtracting the lowest-temperature reading from all those above it. That leaves you with a numerical range of about 16,000 ticks over 40°C, which takes less memory and is easier for calculations.
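As a worked sketch of that baseline trick (the BASELINE value and the a0/a1/a2 fit constants are hypothetical placeholders for your own calibration output):

const unsigned long BASELINE = 1083000UL;     // hypothetical: lowest micros() reading seen
unsigned int x = (unsigned int)(WDTmicrosTime - BASELINE);    // now fits in 0..~16000
float tempC = a2 * (float)x * (float)x + a1 * (float)x + a0;  // 2nd-order fit constants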

And just for fun, I ran a trial on an unregulated system powered by 2x AA lithium batteries. Two things happened: 1) the jitter/noise in the final temperature readings more than doubled, to about 0.25°C, and 2) calibration was lost whenever the thermal mass of the batteries meant that the supply was actively changing, regardless of whether the mcu & reference had settled or not:

Red is the Si reference [left axis], green is the calibration fit equation applied to the WDT micros() [left], and blue is the rail voltage supplied by 2x AA lithium batteries [right axis]. (Note: the low voltage spikes are caused by internal housekeeping events in the Nokia 256MB SD cards.)

Addendum 2019-02-26

This morning I did a trial run which switched from micros() to timer1 overflows, using code from Nick Gammon’s Improved sketch using Timer 1. This increased the raw delta to almost 5000/°C, but unfortunately the width of the jitter also increased to about 500 counts. So I’m seeing somewhere near ±0.05°C equivalent of precision error – although my impression is that it’s been reduced somewhat because Timer1 only overflows 122 times per second, while the Timer0/micros had to overflow 100k times. So changing timers means less variability from the while-loop code execution. Next step will be to try driving the timer with the 32khz from the RTC…

Addendum 2019-02-27

So I re-jigged another one of Nick's counting routines, which increments Timer1 based on input from pin D5, using the WDT interrupt to set the interval. Then I enabled the 32.768 kHz output from a DS3231N and connected it to that pin. This pulse is dead-dog slow compared to the WDT oscillator, so I extended the interval out to 4 seconds. Even that long-ish sample time only produced a delta of about 40 counts/°C.

Si7051 reference temp vs Timer1 counts of 32kHz output from DS3231N  (based on data selected from quiescent periods)

There wasn’t enough data to produce high resolution, but my thought was that since the DS3231N has temperature compensated frequency output, it eliminates the xtal as a variable in the question of where the jitter was coming from.  This approach also causes only 2-3 overflows on timer1, so the impact of code execution is further reduced.  Unfortunately, this experiment did not improve the noise situation:

DS3231 32kHz clock ticks vs 4sec WDT interval: raw reading jitter during a relatively quiescent period.

That's about 8 counts of jitter in the raw, which produces readings about ±0.1°C away from the central line. That's actually worse than what I saw with the xtal vs WDT trials, but the increase might be an artifact of the pokey time-base. The smoking gun now points squarely at variations in the WDT oscillator output as the source of the noise.

That's kind of annoying, as it suggests it will take filtering/overhead to deliver better than about 0.1°C from this technique, even though higher resolution is obviously there in the deltas. The real trick will be matching the right filter with all the other time lags & constraints in this system. Still, extra data that you can get from a code trick is handy, even if sometimes it only serves to verify that one of your other sensors hasn't gone squirrely.

—> just had a thought: oversampling & decimation eats noise like that for breakfast!
Just fired up a run taking 256 x 16ms samples (the shortest WDT interval allowed) with Timer1 back on the xtal clock. Now I just have to wait another 24 hours to see if it works…

Addendum 2019-02-28

OK: the data's in from oversampling the WDT vs Timer1 method. I sum the Timer1 counts from 256 readings (on a 16 msec WDT interval) and then >>4 to decimate. These repeated intervals took about 4 seconds of sampling time.

si7051 reference temperature vs 256x Oversampled Timer 1 reading on 16 msec WDT interval: Fit Equation

This produced 750 counts/°C for a potential resolution of 0.0013°C, but as with the previous trials, the method falls down because the jitter is so much larger:

Variability on 256 Timer1 readings of 16msec WDT interval : During quiescent period

100 points of raw precision error brings the method back down to a modest 'real' resolution of only ±0.066°C at best. The fact that this variability is so similar to the previous trials, and that oversampling did not improve it, tells me that the problem is not noise: the WDT oscillator is wandering around like a drunken sailor because of factors other than temperature. If that's the case, there's probably nothing I can throw at the problem to make it go away.

Several people pointed out that there is another way to measure temperature with some of the Atmel chips, so I decided to fire that up for a head-to-head trial against the WDT method.  Most people never use it because the default spec is ±10°C and it only generates 1 LSB/°C correlation to temperature for a default resolution of only 1°C.  Some previous efforts with this internal sensor produced output so bad it was used as a random seed generator.

But heck, if I’m going through the effort of calibration anyway, I might be able to level the playing field somewhat by oversampling those readings too:

si7051 reference temperature  °C  vs  4096 reading oversample of the internal diode: Fit equation

Even with 4096 samples from the ADC, this method only delivered ~75 counts /°C. But the internal diode is super linear, and the data is less variable than the WDT:

Variability from 4096 ADC readings of the internal reference diode : During quiescent period

Five counts of raw variability means the precision error is only ±0.033°C (again, this becomes our real resolution, regardless of the raw count delta). So even after bringing out the big guns to prop up the WDT, the internal reference diode blew the two-oscillator method out of the water on the very first try.

#include <avr/sleep.h>

volatile uint16_t adc_irq_count;

ISR (ADC_vect)
{
  adc_irq_count++;   // track how many samples have been taken
}

unsigned int internalDiodeReading = 0;
unsigned long sum = 0;
unsigned int wADC;
ADMUX = (_BV(REFS1) | _BV(REFS0) | _BV(MUX3)); // set the 1.1v aref and mux to the diode
ADCSRA |= _BV(ADSC);  // 1st (throwaway) conversion to engage the new settings
delay(10);            // wait for the ADC reference cap to stabilize
adc_irq_count = 0;
ADCSRA |= _BV(ADIE);  // enable the conversion-complete interrupt so the ISR above fires
do {
  noInterrupts ();
  set_sleep_mode( SLEEP_MODE_ADC ); // sleep mode just to save power here
  sleep_enable();
  ADCSRA |= _BV(ADSC);  // start the ADC
  do {
    interrupts();
    sleep_cpu();      // sleep (MUST be called immediately after interrupts())
    noInterrupts();   // disable interrupts so the while(bit_is_set) check isn't interrupted
  } while (bit_is_set(ADCSRA, ADSC));  // back to sleep if conversion not done
  sleep_disable();  interrupts(); // enable interrupts again
  wADC = ADCW;   // reading ADCW combines both ADCL & ADCH
  sum += wADC;   // add new reading to the total
} while (adc_irq_count < 4096);  // sets how many times the ADC is read
ADCSRA &= ~ _BV( ADIE ); // no more ADC interrupts after this
internalDiodeReading = (sum >> 6); // decimation turns the sum into one oversampled reading

For now, I think I've kind of run out of ideas on how to make the WDT method more precise… Oh well, it's not the first time I've tried to re-invent the wheel and failed (and it probably won't be my last…). At least it was a fun experiment, and who knows, perhaps I'll find a better use for the technique in some other context.

I'll spend some more time noodling around with the internal diode sensor and see if I can wrestle better performance out of it, since it's probably less vulnerable to aging drift than the WDT oscillator. I'm still a bit cautious about oversampling the internal diode readings, because the math depends on there being at least 1-2 LSBs of noise to work, and I already know the internal 1.1v ref is quite stable. I threw the sleep mode in there just to save power & reduce internal heating, but now that I think about it, the oversampling might work better if I let the chip heat a little over the sampling interval, substituting that as a synthetic replacement for noise if there is not enough. The other benefit of the diode is that it will be resilient to a varying supply voltage, and we are currently experimenting with more loggers running directly from batteries to see if that's a viable way to extend the operating lifespan.

Addendum 2019-04-05

So I’ve kept a few units perking along with experiments oversampling the internal diode. And on most of those runs I’ve seen an error that’s quite different from the smoothly wandering WDT method. The internal diode readings have random & temporary jump-offsets of about 0.25°C:

si7051 (red) reference vs internal Diode (orange) over sampled 16384 reads then >>10 for heavy decimation.

These occur at all temperatures from +20 to -15°C, and I have yet to identify any pattern. The calculation usually returns to a close-fitting line after some period at the offset, with roughly 50% of the overall record fitting the calibration closely. This persists through all the different oversampling intervals, and stronger decimation does not remove the artifact: it's in the raw data. No idea why… though I am wondering if perhaps the clock's entropy drift is somehow quantized?

Tutorial: Using an MS5803 pressure sensor with Arduino

With the DS18B20 temperature sensors in place, it was time to add the ‘depth’ part of the standard CDT suite.  After reading an introduction to the Fundamentals of Pressure Sensor Technology, I understood that most of the pressure sensors out there would not be suitable for depth sensing because they are gauge pressure sensors, which need to have a “low side” port vented to atmosphere (even if the port for this is hidden from view).

This is one of our earliest pressure loggers with a 3″ pvc housing. We now use 2″ PVC pipes (shown at the end of this post) which are much easier to construct. For an exploded view of the new housing see the appendix at the end of the article in Sensors.

I needed an "absolute" pressure transducer, one that has its low side port sealed to a vacuum. I found a plethora of great altimeter projects in the rocketry & octocopter world (with Kalman filtering!), but far fewer people doing underwater implementations in caves. There are a few DIY dive computer projects out there, at various stages of completion, that use versions of the MS5541C & MS5803 pressure sensors from Measurement Specialties, or the MPX5700 series from Freescale. Victor Konshin had published code support for the MS5803 sensors on Github, but his .cpp was accessing them in SPI mode, and I really wanted to stick with an I2C implementation as part of my quest for a system with truly interchangeable sensors. That led me to the OpenROV project, where they had integrated the 14 bar version of the MS5803 into their IMU sensor package, and they were using an I2C implementation. Excellent! I ordered a couple of 2 bar and 5 bar sensors from Servoflo ($25 each + shipping… ouch!), and a set of SMT breakout boards from Adafruit. A little bit of kitchen-skillet reflow, and things were progressing well. (Note: I now mount these sensors by hand, which is faster once you get the hang of it.)

Breaking out the I2C interface traces to drive my pressure sensor: my first "real" hack on the project so far. (Note: This material was posted in early 2014, and the only reason I did this hack was that at the time I was running the TinyDuino stack directly from an unregulated 4.5v battery pack. On my more recent loggers, built with regulated 3.3v ProMini-style boards, I can just use the Vcc line to power the MS5803 sensors without all this bother…)

But as I dug further into the MS5803 spec sheets, I discovered a slight complication. These sensors require a supply voltage between 1.8-3.6v, and my unregulated TinyDuino stack, running on 3 AAs, was swinging from 4.7 down to 2.8v. I was going to need some sort of voltage regulator to bring the logic levels into a range the sensors could tolerate, with all the attendant power losses that implied… And then it dawned on me that this same problem must exist for the other I2C sensors already available on the TinyDuino platform, so perhaps I could hack into those board connections and drive my pressure sensor (instead of burning away months' worth of power regulating the entire system). The Tiny Ambient Light Sensor shield carried the TAOS TSL2572, which has nearly identical voltage and power requirements to my MS5803.

I used JB weld to provide support for those delicate solder connections.

So their voltage regulator and level shifter could do all the work for me if I could lift those traces. But that was going to be the most delicate soldering work I have ever attempted. And I won't pull your leg, it was grim, very grim indeed. Excess heat from the iron conducted across the board and melted the previous joints with each additional wire I added. So while the sensors themselves came off easily with a drywall knife, it took two hours (of colorful language…) to lift the traces out to separate jumper wires. I immediately slapped on a generous amount of JB weld, because the connections were so incredibly fragile. I produced a couple of these breakouts, because I have other sensors to test, and I face this same logic level/voltage problem on the I2C lines every time I power the unregulated TinyDuinos from a computer USB port.

With a connection to the mcu sorted, it was time to look at the pressure sensor itself. Because I wanted the sensor potted as cleanly as possible, I put the resistor, capacitor, and connections below the breakout board when I translated the suggested connection pattern from the datasheets to this diagram:

This is viewed from above, with only one jumper above the plane of the SOIC-8 breakout. I used a 100nF (104) decoupling cap. The PS pin (protocol select) jumps to VDD, setting I2C mode, and a 10K pulls CSB high to set the address to 0x76. Connecting CSB to GND would set the I2C address to 0x77, so you can potentially connect two MS5803 pressure sensors to the same bus.
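A quick way to confirm which address your CSB wiring actually produced is to probe for an ACK with nothing beyond the Wire library (a sketch; 0x1E is the MS5803 reset command from the datasheet):

#include <Wire.h>
Wire.begin();
Wire.beginTransmission(0x76);            // CSB pulled high = 0x76
Wire.write(0x1E);                        // MS5803 reset command
byte error76 = Wire.endTransmission();   // 0 means a device ACKed at this address
// repeat with 0x77 (CSB grounded) if error76 came back non-zero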

And fortunately the solder connections are the same for the 5 bar and the 2 bar versions:

I've learned not to waste time making the solder joints "look pretty". If they work, I just leave them.

After testing that the sensors were actually working, I potted them into the housings using JB plastic weld putty, and Loctite E30CL:

The Loctite applicator gun is expensive, but it gives you the ability to bring the epoxy right to the edge of the metal ring on the pressure sensor.

So that left only the script. The clearly written code by Walt Holm (on the OpenROV github) was designed around the 14 bar sensor; great for a DIY submersible, but not quite sensitive enough to detect how a rainfall event affects an aquifer. So I spent some time modifying their calculations to match those on the 2 Bar MS5803-02 datasheet:

// Calculate the actual Temperature (first-order computation)
TempDifference = (float)(AdcTemperature - ((long)CalConstant[5] * pow(2, 8)));
Temperature = (TempDifference * (float)CalConstant[6]) / pow(2, 23);
Temperature = Temperature + 2000;  // temp in hundredths of a degree C

// Calculate the second-order offsets
if (Temperature < 2000.0)  // Is temperature below or above 20.00 deg C?
{
  T2 = 3 * pow(TempDifference, 2) / pow(2, 31);
  Off2 = 61 * pow((Temperature - 2000.0), 2);
  Off2 = Off2 / pow(2, 4);
  Sens2 = 2 * pow((Temperature - 2000.0), 2);
}
else
{
  T2 = 0;
  Off2 = 0;
  Sens2 = 0;
}

// Calculate the pressure parameters for the 2 bar sensor
Offset = (float)CalConstant[2] * pow(2, 17);
Offset = Offset + ((float)CalConstant[4] * TempDifference / pow(2, 6));
Sensitivity = (float)CalConstant[1] * pow(2, 16);
Sensitivity = Sensitivity + ((float)CalConstant[3] * TempDifference / pow(2, 7));

// Add second-order corrections
Offset = Offset - Off2;
Sensitivity = Sensitivity - Sens2;

// Calculate absolute pressure
Pressure = (float)AdcPressure * Sensitivity / pow(2, 21);
Pressure = Pressure - Offset;
Pressure = Pressure / pow(2, 15);
Pressure = Pressure / 100;  // Set output to millibars

The nice thing about this sensor is that it also delivers a high resolution temperature signal, so my stationary pressure logger does not need a second sensor for that.
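For context, a bare-bones sketch of how the raw AdcPressure & AdcTemperature values used above can be fetched over I2C (assuming the sensor answers at 0x76; the command bytes are from the MS5803-02 datasheet):

#include <Wire.h>

unsigned long readMS5803adc(byte command) {
  Wire.beginTransmission(0x76);      // CSB pulled high = address 0x76
  Wire.write(command);               // 0x48 = convert D1 (pressure) @ OSR 4096
  Wire.endTransmission();            // 0x58 = convert D2 (temperature) @ OSR 4096
  delay(10);                         // max conversion time is ~9 ms at OSR 4096
  Wire.beginTransmission(0x76);
  Wire.write(0x00);                  // ADC read command
  Wire.endTransmission();
  Wire.requestFrom(0x76, 3);         // the result is 24 bits, MSB first
  unsigned long result = 0;
  while (Wire.available()) { result = (result << 8) | Wire.read(); }
  return result;
}
// then:  AdcPressure = readMS5803adc(0x48);  AdcTemperature = readMS5803adc(0x58);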

A ReefNet Sensus Ultra is co-deployed with my pressure sensor to ground truth this initial run.

So that's it: the unit went under water on March 22, 2014, and the current plan is to leave it there for about 4 months. This kind of long-duration submersion is probably way out of spec for the epoxy, and for the pressure sensor's flexible gel cap. But at least we potted the sensor board with a clear epoxy, so it should be relatively easy to see how well everything stands up to the constant exposure. (I do wonder if I should have put a layer of silicone over top of the sensor like some of the dive computer manufacturers.)

 

Addendum 2014-03-30

I keep finding rumors of a really cheap "uncompensated" pressure sensor out there on the net for about 5 bucks: the HopeRF HSF700-TQ. But I have yet to find any for sale in quantities less than 1000 units. If anyone finds a source for a small number of these guys, please post a link in the comments and I will test them out. The ten dollar MS5805-02BA might also be pressed into service for shallow deployments using its extended range, if one can seal the open port well enough with silicone. And if all of these fail due to the long duration of exposure, I will go up-market to industrial sensors isolated in silicone oil, like the 86bsd, but I am sure they will cost an arm and a leg.

Addendum 2014-04-15

Looks like Luke Miller has found that the float values used in the calculations from the ROV code generate significant errors. He has corrected them to integers and posted code on his github. Unfortunately, one of the glitches he found was at 22.5°C, right around the temperature of the water my units are deployed in. I won't know for some months how this affects my prototypes. With so many sensors hanging off my units, I don't actually have enough free RAM left for his "long int" calculations, so I am just logging the raw data for processing later.

Addendum 2014-09-10

The unit in the picture above survived until we replaced that sensor with a 5-Bar unit on Aug 25th. That's five months under water for a sensor that is only rated in the spec sheets for a couple of hours of submersion. I still have to pull the barometric signal out of the combined readings, but on first bounce the data looks good (mirroring the record from the ReefNet Sensus Ultra). Since the 2-Bar sensor was still running, it was redeployed in Rio Secreto Cave (above the water table) on 2014-09-03. It will be interesting to see just how long one of these little sensors will last.

Addendum 2014-12-18

The 2Bar unit (in the photo above) delivered several months of beautiful barometric data from its "dry" cave deployment, and was redeployed for a second underwater stint in Dec 2014. The 5Bar unit survived 4 months of salt water exposure, but we only got a month of data from it because an epoxy failure on the temperature sensor drained the batteries instantly. After a makeshift repair in the field, it has now been re-deployed as a surface pressure unit. The good news is that we had the 5Bar sensor under a layer of QSil 216 Clear Liquid Silicone, and the pressure readings look normal compared to the naked 2bar sensor it replaced. So this will become part of my standard treatment for underwater pressure sensors, to give them an extra layer of protection.

[NOTE: DO NOT COAT YOUR SENSORS LIKE THIS! this silicone rubber coating failed dramatically later – it was only the stable thermal environment of the caves that made it seem like it was working initially and the silicone also seemed to change its physical volume with long exposure to salt water.  I’m leaving the original material in place on this blog as it’s an honest record of the kinds of mistakes I worked through during this projects development.]

Addendum 2015-01-16

I know the MCP9808 is a little redundant here, but at $5 each it's nice to reach ±0.25°C accuracy. The MS5803s are only rated to ±2.5°C, and you can really see that in the data when you compare the two. The low-profile 5050 LED still has good visibility with a 50kΩ limiter on the common ground line. Test your sensors & LED well before you pour the epoxy! (Note: the 9808 temp sensor & LED pictured here failed after about 8 months at 10m. I suspect this was due to the epoxy flexing under pressure at depth because of the large exposed surface area. The MS5803 was still working fine.)

Just thought I would post an update on how I am mounting the current crop of pressure sensors. My new underwater housing design had less surface area, so I combined the pressure sensor, the temperature sensor, and the indicator LED into a single well, which gives me the flexibility to use larger breakout boards. That's a lot of surface area to expose at depth, so I expect there will be some flexing forces. At this point I have enough confidence in the Loctite E30CL to pot everything together, even though my open-ocean tests have seen significant yellowing. The bio-fouling is pretty intense out there, so it could just be critters chewing on the epoxy compound. Hopefully a surface layer of Qsil will protect this new batch from that fate.

Addendum 2015-03-02

Just put a 4-5mm coating of Qsil over a few MS5803s in this new single-ring mount, and on the bench the coating seems to reduce the pressure reading by 10-50 mbar compared to the readings I get from uncoated sensors. Given that these sensors are ±2.5% to begin with, the worst ones have about doubled their error. I don't know if this will be constant through the depth range, or if the offset will change with temperature, but if it means I can rely on the sensor operating for one full year under water, I will live with it.

Addendum 2015-04-06 :  Qsil Silicone Coating idea FAILS

Just returned from a bit of fieldwork where we had re-purposed a pressure sensor from underwater work to the surface. That sensor had Qsil silicone on it, and while it delivered a beautiful record in the flooded caves, where temperatures vary by less than a degree, it went completely bananas out in the tropical sun, where temps varied by 20°C or more per day. I suspect the silicone was expanding and contracting with temperature, and this put physical pressure on the sensor that completely swamped the barometric pressure signal.

Addendum 2016-02-01

Holding MS5803 sensor in place for soldering

Use the smallest zip tie you can find.

Since these are SMD sensors, mounting them can be a bit of a pain, so I thought I would add a few comments about getting them ready. I find that holding the sensor in place with a zip tie around the SOIC-8 breakout makes a huge difference. Also, I find it easier to use the standard sharp tip on my Hakko, rather than a fine point, which never seems to transfer the heat as well.

 

SolderingMS5803-2

I also use a wood block to steady my hand during the SMD-scale soldering.

I plant the point of the iron into the small vertical grooves on the side of the sensor. I then apply a tiny bead of solder to the tip of the iron, which usually ends up sitting on top, then I roll the iron between my fingers to bring the bead around to make contact with the pads on the board. So far this technique has been working fairly well, and though the sensors do get pretty warm, they have all survived so far. If you get bridging, you can usually flick away the excess solder if you hold the sensor so that the bridged pads point downwards when you re-heat them.

 

Stages of MS5803 mounting procedure

After mounting the sensor to the breakout board, I think of the rest of the job in two stages: step one is the innermost pair (which are flipped horizontally relative to each other), and step two is the outermost pair, where I attach the incoming I2C wires. Here SCL is yellow and SDA is white. In this configuration CSB is pulled up by that resistor, giving you an I2C address of 0x76. If you wanted a 0x77 bus address, you would leave out the resistor and attach the now-empty hole immediately beside the black wire to that GND line.

Sometimes you need to heat all of the contacts on the side of the sensor at the same time with the flat of the iron, to re-flow any bridges that have occurred underneath the sensor itself. If your sensor does not work, or gives you the wrong I2C address, it's probably because of this hidden bridging problem.

back side connection ms5803

Adafruit still makes the nicest boards to work with, but the cheap eBay SOIC8 breakouts (like the one pictured above) have also worked fine, and they let me mount the board in smaller epoxy wells. Leave the shared leg of the pullup resistor/capacitor long enough to jump over to Vcc on the top side of the board.

Addendum 2016-03-08

I have the next generation of pressure sensors running burn-in tests to see what offsets have been induced by contraction of the epoxy. I've been experimenting with different mounting styles to see if that plays a part too:

The housings are left open during the bookshelf runs, as it takes a couple of weeks for the PVC solvent to completely clear out, and who knows what effect that would have on the circuits. (Note: for more details on how I built these loggers and housings, you can download the paper from Sensors.)

The MS5803s auto-sleep brilliantly, so several of these loggers make it down to ~0.085 mA between readings, and most of that is due to the SD card. I'm still using E30CL, but wondering if other potting compounds might be better? There just isn't much selection out there if you can only buy small quantities. The E30 flexed enough on deeper deployments that the bowing eventually killed off the 5050 LEDs (standard 5mm encapsulated LEDs are a better choice), and I saw some suspicious trends in the temp readings from the MCP9808 under that epoxy too…

Addendum 2016-03-09

Just a quick snapshot from the run test pictured above:

PressureSensorTest_20160309

These are just quick first-order calculations (in a room above 20°C). Apparently no effect from the different mounting configurations, but by comparison to the local weather.gov records, the whole set is reading low by about 20 mbar. This agrees with the offsets I’ve seen in other MS5803 builds, but I am kicking myself now for not testing this set of sensors more thoroughly before I mounted them. Will do that properly on the next batch.

Addendum 2016-09-22

The inside of an MS5803, after the white silicone gel covering the sensor was knocked off by accident.

You can change the I2C address on these sensors to either 0x76 or 0x77 by connecting the CSB pin to either VDD or GND. This lets you connect two pressure sensors to the same logger, and I've been having great success putting that second sensor on cables as long as 25 m. This opens up a host of possibilities for environmental monitoring, especially for things like tide-gauge applications where the logger can be kept above water for easier servicing. It's worth noting that on a couple of deployments we've seen data loss because the sensor spontaneously switched its bus address AFTER several months of running while potted in epoxy. My still-unproven assumption is that moisture somehow penetrated the epoxy and either oxidized a weak solder joint or provided some other current path that caused the sensor to switch over.
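Reading the pair is straightforward once the addresses are set. Here's a minimal raw-read sketch using nothing but the Wire library; the command bytes (0x1E reset, 0x48 for a D1 pressure conversion at OSR 4096, 0x00 for the ADC read) come from the MS5803 datasheet, and you would still run each raw result through the PROM compensation math shown earlier to get actual pressures:

```cpp
// Poll raw D1 (pressure) conversions from two MS5803s sharing one I2C bus.
#include <Wire.h>

const byte SENSOR_A = 0x76;   // CSB pulled high
const byte SENSOR_B = 0x77;   // CSB tied to GND (e.g. the sensor out on the cable)

uint32_t readRawD1(byte addr) {
  Wire.beginTransmission(addr);
  Wire.write(0x48);                   // start D1 conversion at OSR 4096
  Wire.endTransmission();
  delay(10);                          // conversion takes up to ~9 ms at OSR 4096
  Wire.beginTransmission(addr);
  Wire.write(0x00);                   // ADC read command
  Wire.endTransmission();
  Wire.requestFrom(addr, (byte)3);    // 24-bit result, MSB first
  uint32_t raw = 0;
  while (Wire.available()) raw = (raw << 8) | Wire.read();
  return raw;
}

void setup() {
  Wire.begin();
  Serial.begin(9600);
  byte addrs[2] = {SENSOR_A, SENSOR_B};
  for (byte i = 0; i < 2; i++) {      // datasheet recommends a reset after power-up
    Wire.beginTransmission(addrs[i]);
    Wire.write(0x1E);
    Wire.endTransmission();
  }
  delay(5);                           // the reset sequence takes ~3 ms
}

void loop() {
  Serial.print(F("A: "));   Serial.print(readRawD1(SENSOR_A));
  Serial.print(F("  B: ")); Serial.println(readRawD1(SENSOR_B));
  delay(5000);
}
```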

Addendum 2017-04-30

Hypersaline environments will chew through the white cap in about 3 months.

Given what a pain these little sensors are to mount, it's been interesting to see the price of these pressure sensor breakout modules falling over time. This spring the 1 & 14 bar versions fell below $24 on eBay. Of course they could be fake or defective in some way, but I'm probably going to order a few GY-MS5803 modules to see how they compare to the real thing.

Addendum 2020-02-29: Mounting Pressure Sensors under Oil

When exposed to freshwater environments & deployed at less than 10 m depth, a typical MS5803 gives us 2-3 years of data before it expires. However, we often do deployments in ocean estuaries where wave energy & high salt concentrations shorten the operating life to a year or less. So now we mount them on dongles, which makes it easy to swap out an expired sensor in the field. I described that sensor mounting system in the 2017 build videos:

https://youtu.be/EKNNDPGr1Aw&t=2590s

Media-isolated pressure sensors are common in the industrial market, but they are quite expensive. So we've also used these dongles to protect our pressure sensors under a layer of oil. I've seen this done by the ROV crowd using commercial isolation membranes, or IV drip bags as flexible bladders, but like most of our approaches, the goal here was to develop a method I could retrofit to the units already in the field and repair using materials from the local grocery store:

The sensor is already potted into a NIBCO 1/2″ x 3/4″ Male Pex Adapter, to which we will mate a NIBCO 3/4″ Female Swivel Adapter.

Since the balloon in this case is too large, I simply tie & cut it down to size. You can also cut your membrane from gloves or use small-size nitrile finger cots.

Remove the O-ring from the swivel adapter stem and insert the ‘neck’ of the balloon.

Pull the balloon through till the knot-end becomes visible.

Pull the balloon over the rim on the other side of the pex adapter.

Place the O-ring over the balloon, and cut away the rolled end material.

Now the threaded swivel ring will not bind on the rubber when it gets tightened. Note that the knot is just visible at the stem.

Fill the mounted sensor ‘cup’ with silicone oil or mineral oil. You could also use lubricants produced by o-ring manufacturers that do not degrade rubbers over time.

Gently push the balloon back out of the stem so that there is extra material in direct contact with oil. You don’t want the membrane stretched tight when you bring the parts together.

Then place the swivel stem on the sensor cup with enough extra membrane that it can move freely inside the protective stem.

. . . and tighten down with the threaded ring to create a water-tight seal.

After assembly the membrane material should be quite loose to accommodate pressure changes & any thermal expansion of the oil.

Small trapped air bubbles can cause serious problems in dynamic hydraulic systems, but I don't think the volume of air in the balloon matters as much when you are only taking one reading every 5-15 minutes. If you do this oil mount with other common pressure sensors like the BMP280, then you are pretty much guaranteed to have some kind of bubble inside the sensor can, but so far I have not seen any delta compared to ‘naked’ sensors of the same type on parallel test runs. It's also worth noting that, depending on the brand, finger cots can be quite thin, and in those cases I sometimes use two layers for a more robust membrane. Put a drop or two of oil between the joined surfaces of the membranes with a cotton swab to prevent binding – they must slide freely against each other and inside the pex stem.

Yes, there is a pressure sensor buried in there! We got data for ~3.5 months before the worms covered it completely. In these conditions a larger IV bag is a better choice than the small oil reservoir I've described above. Simply attach that flexible oil-filled bladder directly to the stem of a 1/2″ pex x 3/4″ swivel connector with a crimp ring.

It's also worth adding a comment here on the quality of the oil that you use. For example, silicone oil can be used on O-rings, and sources like the Parker O-Ring Handbook describe it as safe for all rubber polymers. But it's often hard to find pure silicone oil, and hardware store versions often use a carrier or thinner (like kerosene) that will damage or even outright dissolve rubbers on contact. And although we've used the mineral oil/balloon combination for short periods, nitrile is a better option in terms of longevity. With nitrile's lower flexibility, you have to be careful when fitting cots over the O-ring end of the connector tube, because it leaves leaky folds if there's too much extra material, or tears easily if it's too small. In all cases the flexible material should fit into the stem's 3/4 inch diameter without introducing any tension in the membrane when you assemble the connector parts. It must be free to move back & forth in response to external pressure changes.

Our latest housing design with direct connections to make sensor replacement easier

Also note that if you have to build one of the larger white PVC sensor cups shown in the video (because your sensor is mounted on a large breakout board), I've found that clear silica gel beads make a reasonable filler material under the breakout board BEFORE you pour the potting epoxy into the sensor well. This reduces the amount of epoxy needed, so there is less volume contraction when it cures, but a low-viscosity epoxy like E30CL still flows well enough around the beads and allows the air bubbles to escape. With wide-diameter sensor cups, you will probably have to switch over to something like a polyurethane condom as the barrier membrane.

Addendum 2021-10-12:

Just an update on how I now prepare these raw sensors: 30 AWG breakout wires attach directly to the MS5803 pressure sensor, with CSB to GND (setting the sensor address to 0x77) & PS bridged to Vcc (setting I2C mode) via the legs of the 104 ceramic decoupling capacitor. In these photos SDA is white & SCL is yellow.

The wire connections are then embedded in epoxy inside our standard sensor dongles.

Addendum 2024-09-15:

We posted a new tutorial: How to Normalize a Set of Pressure Sensors. There are always offsets between pressure sensors from different manufacturers, and normalization lets you use them together in the same sensor array.
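The tutorial covers the full procedure, but the core idea is simple enough to sketch here: co-locate the sensors for a reference run, average each sensor's readings over the same window, and store its difference from a chosen reference as a per-sensor correction. The function below is only an illustration of that idea (the names are mine, not from the tutorial):

```cpp
// Compute a sensor's constant offset from a reference during a co-located run.
// The result gets stored per sensor and subtracted from its future readings.
float meanOffset(const float readings[], int n, float referenceMean) {
  float sum = 0;
  for (int i = 0; i < n; i++) sum += readings[i];
  return (sum / n) - referenceMean;
}

// later, in the field:
// float corrected = rawPressure - storedOffsetForThisSensor;
```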