Category Archives: Developing a TEMP ℃ chain

A string of underwater temperatures using one-wire DS18b20s

How to make ‘High-Resolution’ Sensor Readings with DIGITAL I/O pins

AN685 figure 8: The RC rise-time response of the circuit allows microcontroller timers to be used to determine the relative resistance of the NTC element.

For more than a year we've been oversampling the Arduino's humble ADC to get >16-bit ambient temperature readings from a 20¢ thermistor. That pin-toggling method is simple and delivers solid results, but it requires the main CPU to stay awake long enough to capture multiple readings for the decimation step (~200 milliseconds @ 250 kHz ADC clock). While that only burns about 100 mAs per day at a typical 15-minute sample interval, a read through Thermistors in Single Supply Temperature Sensing Circuits hinted that I could ditch the ADC and read those sensors with a pin-interrupt method that would let me sleep the CPU during the process. An additional benefit is that the sensors draw no power unless they are actively being read.

Given how easy it is to drop a trend-line on your data, I've never understood why engineers dislike non-linear sensors so much. If you leave out the linearizing resistor you end up with this simple relationship.
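In other words (my paraphrase of the relationship this method relies on): because the reference resistor and the thermistor each charge the same capacitor to the same threshold voltage, the time-to-threshold is proportional to the resistance, so

R_thermistor = R_reference x ( t_thermistor / t_reference )

and everything else (capacitor value, threshold level, supply voltage) cancels out of the ratio.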

The resolution of time-based methods depends on the speed of the clock, and Timer1 can be set with a prescaler of 1, ticking in step with the 8 MHz oscillator on our Pro Mini based loggers. The input capture unit can save Timer1's counter value as soon as the voltage on pin D8 passes a high/low threshold. This underappreciated feature of the 328p is faster than typical interrupt handling, and people often use it to measure the time between rising edges on two inputs to determine the pulse width/frequency of incoming signals.

You are not limited to one sensor here – you can line them up like ducks on as many driver pins as you have available. As long as they share that common connection, you just read each one sequentially with the other pins in input mode, as sketched below. Since they are all being compared to the same reference, you'll see better cohort consistency than you would by using multiple series resistors.
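As a rough sketch of what that sequential reading looks like (readRiseTime() here is a hypothetical helper standing in for the drain / charge / time-the-rise sequence developed below, and the pin numbers are just examples):

const byte drivePins[] = {7, 9, 6};        // example: 7 = reference resistor, 9 & 6 = NTC sensors
unsigned long elapsed[sizeof(drivePins)];

for (byte i = 0; i < sizeof(drivePins); i++) {
  for (byte j = 0; j < sizeof(drivePins); j++) {
    pinMode(drivePins[j], INPUT);          // everything high-impedance except the pin under test
    digitalWrite(drivePins[j], LOW);
  }
  elapsed[i] = readRiseTime(drivePins[i]); // hypothetical helper: charge through this pin only & time the rise
}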

Using 328p timers is described in detail on Nick Gammon's Timers & Counters page, and I realized that I could tweak the method from 'Timing an interval using the input capture unit' (Reply #12) so that it recorded only the initial rise of an RC circuit. This is essentially the same idea as his capacitance measuring method, except that I'm using D8's external pin change threshold rather than a level set through the comparator. That should be somewhere around 0.6*Vcc, but the actual level doesn't matter so long as it's consistent between the two consecutive reference & sensor readings. It's also worth noting that the method doesn't require an ICU peripheral – any pin that supports a rising/falling interrupt can be used (see addendum for details). It's just that the ICU makes the method more precise, which is important with small capacitor values. (Note that AVRs have Schmitt triggers on the digital GPIO pins. This is not necessarily true for other digital chips. For pins without a Schmitt trigger, this method may not give consistent results.)

Using Nick's code as my guide, here is how I set up Timer1 with the ICU:

float referencePullupResistance=10351.6;   // a 1% metfilm measured with a DVM
volatile boolean triggered;
volatile unsigned long overflowCount;
volatile unsigned long finishTime;         // set by the capture ISR below

ISR (TIMER1_OVF_vect) {  // triggers when T1 overflows: every 65536 system clock ticks
overflowCount++;
}

 

ISR (TIMER1_CAPT_vect) {     // transfers Timer1 when D8 reaches the threshold
sleep_disable();
unsigned int timer1CounterValue = ICR1;       // Input Capture register (datasheet p117)
unsigned long overflowCopy = overflowCount;

if ((TIFR1 & bit (TOV1)) && timer1CounterValue < 256){    // 256 is an arbitrary low value
overflowCopy++;    // if “just missed” an overflow
}

if (triggered){ return; }  // multiple trigger error catch

finishTime = (overflowCopy << 16) + timer1CounterValue;
triggered = true;
TIMSK1 = 0;  // all 4 interrupts controlled by this register are disabled by setting zero
}

 

void prepareForInterrupts() {
noInterrupts ();
triggered = false;                   // reset for the  do{ … }while(!triggered);  loop
overflowCount = 0;               // reset overflow counter
TCCR1A = 0; TCCR1B = 0;     // reset the two (16-bit) Timer 1 registers
TCNT1 = 0;                             // we are not preloading the timer for match/compare
bitSet(TCCR1B,CS10);           // set prescaler to 1x system clock (F_CPU)
bitSet(TCCR1B,ICES1);          // Input Capture Edge Select ICES1: =1 for rising edge
// or use bitClear(TCCR1B,ICES1); to record falling edge

// Clearing Timer/Counter Interrupt Flag Register  bits  by writing 1
bitSet(TIFR1,ICF1);         // Input Capture Flag 1
bitSet(TIFR1,TOV1);       // Timer/Counter Overflow Flag

bitSet(TIMSK1,TOIE1);   // interrupt on Timer 1 overflow
bitSet(TIMSK1,ICIE1);    // Enable input capture unit
interrupts ();
}

With the interrupt vectors ready, take the first reading with pins D8 & D9 in INPUT mode and D7 HIGH. This charges the capacitor through the reference resistor:

//========== read 10k reference resistor on D7 ===========

power_timer0_disable();    // otherwise Timer0's millis()/micros() overflow interrupt keeps waking the sleeping CPU

pinMode(7, INPUT);digitalWrite(7,LOW);     // our reference
pinMode(9, INPUT);digitalWrite(9,LOW);     // the thermistor
pinMode(8,OUTPUT);digitalWrite(8,LOW);  // ground & drain the cap through 300Ω
LowPower.powerDown(SLEEP_30MS, ADC_OFF, BOD_ON);  //overkill: 5T is only 0.15ms

pinMode(8,INPUT);digitalWrite(8, LOW);    // Now pin D8 is listening

set_sleep_mode (SLEEP_MODE_IDLE);  // leaves Timer1 running
prepareForInterrupts ();
noInterrupts ();
sleep_enable();
DDRD |= (1 << DDD7);          // Pin D7 to OUTPUT
PORTD |= (1 << PORTD7);    // Pin D7 HIGH -> charging the cap through 10k ref

do{
interrupts ();
sleep_cpu ();                //sleep until D8 reaches the threshold voltage
noInterrupts ();
}while(!triggered);      //trapped here till TIMER1_CAPT_vect changes value of triggered

sleep_disable();           // redundant here but belt&suspenders right?
interrupts ();
unsigned long elapsedTimeReff=finishTime;   // this is the reference reading

Now repeat the process a second time with D7 & D8 in INPUT mode, and D9 HIGH to charge the capacitor through the NTC thermistor:

//==========read the NTC thermistor on D9 ===========

pinMode(7, INPUT);digitalWrite(7,LOW);     // our reference
pinMode(9, INPUT);digitalWrite(9,LOW);     // the thermistor
pinMode(8,OUTPUT);digitalWrite(8,LOW);  // ground & drain the cap through 300Ω
LowPower.powerDown(SLEEP_30MS, ADC_OFF, BOD_ON);

pinMode(8,INPUT);digitalWrite(8, LOW);    // Now pin D8 is listening

set_sleep_mode (SLEEP_MODE_IDLE);
prepareForInterrupts ();
noInterrupts ();
sleep_enable();
DDRB |= (1 << DDB1);          // Pin D9 to OUTPUT
PORTB |= (1 << PORTB1);    // set D9 HIGH -> charging through 10k NTC thermistor

do{
interrupts ();
sleep_cpu ();
noInterrupts ();
}while(!triggered);

sleep_disable();
interrupts ();
unsigned long elapsedTimeSensor=finishTime;   //this is your sensor reading

Now you can determine the resistance of the NTC thermistor via the ratio:

unsigned long resistanceof10kNTC=
(elapsedTimeSensor * (unsigned long) referencePullupResistance) / elapsedTimeReff;

pinMode(9, INPUT);digitalWrite(9,LOW);pinMode(7, INPUT);digitalWrite(7,LOW);
pinMode(8,OUTPUT);digitalWrite(8,LOW);  //discharge the capacitor when you are done

The integrating capacitor does a fantastic job of smoothing the readings, and getting the resistance directly eliminates half of the calculations you'd normally do. To figure out your thermistor constants, you need to know the resistance at three different temperatures. These should be evenly spaced and at least 10 degrees apart, with the idea that your calibration covers the range you expect to use the sensor for. I usually put loggers in the refrigerator & freezer to get points with enough separation from normal room temperature – with the thermistor taped to the surface of an si7051. Then plug those values into the thermistor calculator provided by Stanford Research Systems.
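If you'd rather do that conversion on the logger instead of in a spreadsheet, a minimal Steinhart-Hart sketch looks something like this. The A/B/C coefficients below are placeholders only; substitute the constants that calculator produces for your own thermistor:

// 1/T(K) = A + B*ln(R) + C*[ln(R)]^3  - the standard Steinhart-Hart form
float steinhartHartCelsius(float resistanceOhms) {
  const float A = 1.1293e-3;     // placeholder coefficient
  const float B = 2.3410e-4;     // placeholder coefficient
  const float C = 8.7755e-8;     // placeholder coefficient
  float lnR = log(resistanceOhms);                           // natural log
  float kelvin = 1.0 / (A + B * lnR + C * lnR * lnR * lnR);
  return kelvin - 273.15;
}
// usage:  float tempC = steinhartHartCelsius((float)resistanceof10kNTC);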

Just for comparison I ran a few head-to-head trials against my older dithering/oversampling method:

°Celsius vs time [1 minute interval]: si7051 reference [±0.01°C resolution] vs. pin-toggle oversampled 5k NTC vs. ICU-timed 10k NTC. I've artificially separated these curves for visual comparison, and the 5k was not in direct contact with the si7051 reference, while the 10k NTC was taped to the surface of the chip – so some of the 5k's offset is an artifact. The Timer1 ratios deliver better resolution than 16-bit (equivalent) oversampling in 1/10 the time.

There are a couple of things to keep in mind with this method:

si7051 reference temp (°C) vs 10k NTC temp with a ceramic 106 capacitor. If your ulong calculations overflow at low temperatures like the graph above, switch to doing the division before the multiplication, or use larger 'long long' variables. Also keep in mind that the Arduino will default to 16-bit calculations unless you set/cast everything to unsigned long. Or you could make your life easy, save the raw elapsedTimeReff & elapsedTimeSensor values to your data file, and do the calculations later in Excel.

1) Because my Timer1 numbers were small with a 104 cap, I did the multiplication before the division. But keep in mind that this method can easily generate values that over-run your variables during the calculation. Ulong MAX is 4,294,967,295, so the elapsedTimeSensor reading must stay below 429,496 or the multiplication overflows with a 10,000 ohm reference. Dividing that by our 8 MHz clock gives about 50 milliseconds. The pin interrupt threshold is reached after about one rise-time constant, so you can use an RC rise time calculator to figure out your capacitor's upper size limit. But keep in mind that's one RC at the maximum resistance you expect from your NTC – the resistance at the coldest temperature you expect to measure, as opposed to its nominal rating. (But it's kind of a chicken & egg thing with an unknown thermistor, right? See if you can find a manufacturer's table of values on the web, or simply try a 0.1uF and see if it works.) Once you have some constants in hand, Ametherm's Steinhart & Hart page lets you check the actual temperature at which your particular thermistor will reach a given resistance. Variable over-runs are easy to spot because the problems appear & then disappear whenever some temperature threshold is crossed. I tried to get around this on a few large-capacitor test runs by casting everything to float variables, but that led to other calculation errors.

(Note: Arduino supports 64-bit 'long long' int64_t & uint64_t integers for large number calculations, but be careful – they will gobble up lots of program memory space, typically adding 1-3k to a given compile size. Also, Arduino's printing functions cannot handle 64-bit numbers, so you have to slice them into smaller pieces before using any .print functions.)
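Here's one rough way to handle both issues, using the same variable names as above (treat it as an illustration rather than drop-in code):

// Serial.print() can't take a uint64_t, so peel the digits into a buffer first:
void printUint64(uint64_t value) {
  char buf[21];                      // 20 digits max for a 64-bit number, plus the terminator
  char *p = &buf[20];
  *p = '\0';
  do {
    *--p = '0' + (char)(value % 10);
    value /= 10;
  } while (value > 0);
  Serial.print(p);
}

// 64-bit version of the ratio calculation, so the intermediate value can't overflow:
uint64_t numerator = (uint64_t)elapsedTimeSensor * (uint64_t)referencePullupResistance;
uint64_t resistance64 = numerator / (uint64_t)elapsedTimeReff;
printUint64(resistance64);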

2) This method works with any kind of resistive sensor, but if you have one that goes below ~200 ohms (like a photoresistor in full sunlight) then the capacitor charging could draw more power than you can safely supply from the D9 pin. In those cases add a ~300Ω resistor in series with your sensor to limit the current, and subtract that value from the output of the final calculation. At higher currents you'll also have voltage drop across the MOSFETs controlling the I/O pins (~40Ω on a 3.3v Pro Mini), so make sure the calibration includes the ends of your range.
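With the same variable names as the earlier calculation, that correction is just a subtraction at the end (assuming a 300Ω limiter in the sensor leg):

unsigned long sensorResistance =
  ((elapsedTimeSensor * (unsigned long)referencePullupResistance) / elapsedTimeReff) - 300UL;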

There are a host of things that might affect the readings because every component has a temperature, aging, and other coefficients, but for the accuracy level I'm after many of those factors are handled by the ratio-metric nature of the method. It's almost independent of the value of the capacitor, so you can get away with a cheap ceramic even though its value changes dramatically with temperature and age. (I'd still use at least an X7R, or a plastic film cap for measuring colder temps.) Last year I found that the main system clock was variable to the tune of about 0.05% over a 40°C range, but that shouldn't matter if the reference and the sensor readings are taken close to each other. Ditto for variations in your supply. None of it matters unless it affects the system 'between' the two measurements.

The significant digits from your final calculation depend on the RC rise time, so switching to 100k thermistors increases the precision. Processors with higher clock speeds should also boost your resolution. You can shut down more peripherals with the PRR to use less power during the readings, as long as you leave Timer1 running. I've also found that cycling the capacitor through an initial charge/discharge cycle (via the 300Ω on D8) after long sleep intervals improved the consistency of the readings. That capacitor shakedown might be an artifact of the ceramics I was using, but reading your reference twice at the start of each run might be a good way of eliminating errors that arise from the pre-existing conditions – though you'd also be heating up that resistor, so it's a bit of a trade-off.

Will this method replace our pin-toggled oversampling?  Perhaps not for something as simple as a thermistor since that method has already proven itself in the real world, and I don’t really have anything better to do with A6 & A7. And oversampling still has the advantage of being simultaneously available on all the analog inputs, while the ICU is a more limited resource.  Given the high resolution that’s potentially available with the Timer1/ICU combination, I might save this method for sensors with less dynamic range.  I already have some ideas there and, of course, lots more testing to do before I figure out if there are other problems related to this new method.  I still haven’t determined what the long-term precision is with our Pro Mini clones, and the WDT experiment taught me to be cautious about counting those chickens.


Addendum:   Using the processor's built-in pull-up resistors

After a few successful runs, I realized that I could use the internal pull-up resistor on D8 as my reference, bringing the circuit down to only three components. Measuring the resistance of the internal pull-up is simply a matter of enabling it and then measuring the current that flows between that pin and ground. I ran several tests, and the Pro Mini's internal pull-ups were all close to 36,000 ohms, so my reference becomes 36k plus the 300Ω resistor needed on D8 to safely discharge the capacitor between cycles. I just have to set the port manipulation differently before the central do-while loop:

PORTB |= (1 << PORTB0);     //  enable pullup on D8 to start charging the capacitor
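For context, here is a minimal sketch of how that one change slots into the reference-reading block shown earlier; everything else (the sleep/do-while pattern and the discharge step) stays the same, and the pull-up is switched back off afterwards so it doesn't fight the 300Ω drain:

prepareForInterrupts ();
noInterrupts ();
sleep_enable();
PORTB |= (1 << PORTB0);      // pull-up on D8 starts charging the cap (replaces the D7 drive lines)

do{
  interrupts ();
  sleep_cpu ();
  noInterrupts ();
}while(!triggered);

sleep_disable();
interrupts ();
PORTB &= ~(1 << PORTB0);     // pull-up off again before the discharge step
unsigned long elapsedTimeReff = finishTime;   // reference is now ~36k internal pull-up + the 300Ω on D8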

A comparison of the two approaches:

°Celsius vs time (post-calibration, 0.1uF X7R ceramic cap): si7051 reference [±0.01°C] vs. thermistor ICU ratio with a 10k reference resistor charging the capacitor vs. the same thermistor reading calibrated using the rise time through pin D8's internal pull-up resistor. The reported 'resistance' value of the NTC thermistor was more than 1k different between the two methods, with the 10k met-film reference providing values closer to the rated spec. However, the Steinhart-Hart equation constants from the calibration were also quite different, so the net result was indistinguishable between the two references in the room-temperature range.

The internal pull-up probably isn't a real resistor. More likely it's a very thin semiconductor channel that simply acts like one, and I found the base-line temperature variance to be about 200 ohms over my 40°C calibration range. And because you are charging the capacitor through the reference and through the thermistor, the heat that generates necessarily changes those values during the process. However, when you run a calibration, those factors get absorbed into the S&H coefficients, provided you let the system equilibrate during the run.

As might be expected, any chip operating time affects the resistance of the internal pull-up, so the execution pattern of the code used for your calibration must exactly match your deployment code, or the calibration constants will give you an offset error proportional to the variance of the internal pull-up caused by the processor's run-time. Discharging the capacitor through D8 also generates some internal heating, so those (~30-60ms) sleep intervals also have to be consistent. In data logging applications you can read that reference immediately after a long cool-down period of processor sleep, and use the PRR to reduce self-heating while the sample is taken. Code consistency is always important with time-based calibrations, no matter how stable your reference is.

Another issue was lag, because that pull-up is embedded with the rest of the chip in a lump of epoxy. This was pretty small, with a maximum effect of less than ±0.02°C/minute, and I didn't see it until temperatures fell below -5°C. Still, for situations where temperature is changing quickly I'd stick with the external reference, and physically attach it to the thermistor so they stay in sync.


Addendum:   What if your processor doesn’t have an Input Capture Unit?

With a 10k / 0.1uF combination, I was seeing Timer1 counts of about 5600, which is pretty close to one 63.2% R*C time constant for the pair. That combination limits you to 4 significant figures and takes about 2 x 0.7 msec per reading on average. Bumping the integrating capacitor up to 1uF (ceramic 105) multiplies your time by a factor of 10, for another significant figure and an average of ~15 msec per paired set of readings. Alternatively, a 1uF or greater capacitor allows you to record the rise times with micros() (which counts 8x slower than Timer1) and still get acceptable results, even with the extra interrupts that leaving Timer0 running causes. So the rise-time method could be implemented on processors that lack an input capture unit – provided that they have Schmitt triggers on the digital inputs, like the AVR, which registers a CMOS high transition at ~0.6 * Vcc.

void d3isr() {
triggered = true;
}

pinMode(7, INPUT);digitalWrite(7,LOW);     // reference resistor
pinMode(9, INPUT);digitalWrite(9,LOW);     // the thermistor
pinMode(8,OUTPUT);digitalWrite(8,LOW);  // ground & drain the cap through 300Ω
LowPower.powerDown(SLEEP_30MS, ADC_OFF, BOD_ON);  // 5T is now 1.5ms w 10k

pinMode(8,INPUT);digitalWrite(8, LOW);    // D8 back to high-impedance; D3 does the listening in this variant

triggered = false;
set_sleep_mode (SLEEP_MODE_IDLE);  // IDLE leaves Timer0 running so micros() keeps counting
unsigned long startTime = micros();
noInterrupts ();
attachInterrupt(1, d3isr, RISING); // using pin D3 here instead of D8
DDRD |= (1 << DDD7);          // Pin D7 to OUTPUT
PORTD |= (1 << PORTD7);    // Pin D7 HIGH -> charging the cap through 10k ref
sleep_enable();

do{
interrupts ();
sleep_cpu ();
noInterrupts ();
}while(!triggered);

sleep_disable();
detachInterrupt(1);
interrupts ();
unsigned long D7riseTime = micros()-startTime;

Then repeat the pattern shown earlier for the thermistor reading & calculation. I'd probably bump it up to a ceramic 106 for the micros() method, just for some extra wiggle room. The method doesn't really care what value of capacitor you use, but you have to leave more time for the discharge step as the size of your capacitor increases.

Field Report 2016-03-27: Progress on the 1-Wire DS18B20 Temperature Strings

Happy to report the successful deployment of three more temperature strings:

 

and I think it's fair to say the first two prototypes have almost run their course:

Logger / Sensors | Time / Max. Depth | Comment
#45 (19 x 25cm) [1st build] | 201503-08 (14m) | Full data from temp chain, Qsil problem on pressure sensor
#45 (19 x 25cm) | 201508-12 (6m) | Wire break on 1st segment before deployment: total read failure on temps, pressure data OK
#45 (11 x 25cm) | 201512-1603 (7m) | Full record including pressure. Segment wire broke during retrieval. Brought home for refurb.
#46 (20 x 50cm) [2nd build] | 201503-08 (16m) | Full temp record. Pressure record problem.
#46 (20 x 50cm) | 201508-12 (24m) | Full temp record. Pressure OK after Qsil removed.
#46 (9 x 50cm) | 201512-1603 (24m) | 3rd segment wire break during deployment dive. Nine sensors report OK for duration. Pressure OK. Failed segment removed & unit re-deployed.
#78 (24 x 100cm) [3rd build] | 201512-1603 (16m) | Complete data record. Unit redeployed.
#79 (24 x 25cm, & 10m extension) [4th build] | 201512-1603 (16m) | Wire break during deployment dive. Full logs saved but no data. Failed segment removed & unit re-deployed with 18 nodes.

Fortunately, none of these internal wire failures compromised the integrity of the outer jacket. These are only 7-strand wires, and I will be hunting for more flexible 19-strand replacements.

While it's hard to see all that work deliver only a few successful deployments, I'm happy to note the failures were all at physical pinch points in the cable, with no other apparent problems on the housings or electronics. And even when the wires break, the logger itself keeps chugging along, saving logs full of 1360 & -1 read errors. Sensor problems like this often take a whole logger down, but the MS5803s delivered data through every deployment, so failures on the 1-Wire bus seem to be isolated from the rest of the sensors on the logger.


Though it was always part of the plan, actually pulling off segment swaps in real time was pretty cool…


For the units in the field, I've reinforced the weak points with mastic tape & cable ties to limit bending.

This is good news for me because physical problems are generally the easiest ones to fix…

Up to this point I’ve been using soft silicone jacket cable which is lovely to work with, but I will have to build the next set from sterner stuff.

Of course that means we might have to develop a new set of deployment procedures,  because handling 24m of stiff cable could be a challenge.  These deployments have already been some of the toughest dives on the project, especially when you get into the low-visibility hydrogen sulfide soup that Trish is so fond of…

 

The second generation builds also used less power than the first:

Much better power performance on the second generation of temp strings.

The battery curve is still a bit crunchy, but it looks like it might settle to about 100 mV of drop per month at a 15-minute interval on 2 x 3 AA's. That's nearly the same as the first two loggers, which had twice as many cells. One of the things you notice about cheap DS18b20's from eBay is that they can draw dramatically different sleep currents. This may be compounded by the fairly aggressive 2k2 pullup I use on the bus.

 


Spotted in Tulum:


Just finding a problem doesn’t always tell you how to fix it…

DS18B20 1-Wire calibration with Arduino: Finally nailed it!

Dry well insert for DS18b20 calibrations

The Thermapen tip fits reasonably well into a 9/64″ hole, and that size also makes a good pilot hole for the 1/4 inch DS18 wells. The travel on my little bench-top drill press is only two inches, but that turned out to be the perfect depth.  The whole job took about 4 hours, with less colorful language than I was expecting.

The high residuals I was seeing up at 40°C, and the dubious waterproofing on those cheap DS18b20's, convinced me that I needed a dry well for the calibrations. Since lead has a thermal conductivity 100x better than water, my first thought was to populate a tin can with short lengths of copper pipe and fill the spaces in between from my solder pot. But I was unable to find exactly the right internal diameter of piping for the tight fit I wanted around those sensors, and the charts indicated that aluminum would give me half an order of magnitude more thermal distribution. So despite the fact that I had no experience working metal like this, I ordered a 4″ piece of lathe bar stock from eBay (~$15).

 

 


Go to a 530GPH pump if you have a larger water bath. The aluminum and stainless steel contact point suffered a bit of galvanic corrosion at first, but I separated them with a plastic plate and it seemed fine after that.

It took me a while to figure out how to lubricate the bits: basically you can never have too much cutting fluid at small diameters, but you can use the big 1/4″ bits 'dry' once the pilot holes are in. Just lift the bits frequently to clear the shavings so things don't heat up and bind. And yes, new high-temp drill bits will cost you more than the aluminum rod – but it's worth it. The 3″ diameter rod gave me enough room for 38 DS18's, with a few different locations for the reference Thermapen probe. The next addition to the process was a 1200L/h aquarium pump (~$8), which created a vigorous circulation in the five liter reservoir. So now I have a setup where the clunky old water bath provides a big lump of passively cooling thermal mass, while the sensors and my reference thermometer are held together by the conductivity of the aluminum block.

The full DS18b20 calibration rig, with ~10x1m sensors in what is essentially a star configuration via the intermediate breadboard. 2k2 pullup resistor.

With those new additions, re-running the 100-ish sensors I had on hand produced curves so smooth that it was easy to spot my own procedural errors ( when I did not wait long enough for the Thermapen to equilibrate ) because the DS18 curves had considerably less of the random scatter I had seen in the water-only runs:


After removing the bad reference readings, I generated the same series of y=mx+b fit equations as before, only this time I did not use a graph trend-line to generate the constants (described in the previous post) but instead used the SLOPE & INTERCEPT functions in Excel. This lets you copy and paste the equations from one data set to the next with a macro, which dramatically speeds up the process when you are chugging through more than thirty sensors at a time. It also adjusts & recalculates dynamically when you delete a row. Convert your reference thermometer readings from Celsius into the equivalent 12-bit DS18b20 reading by dividing by the DS18's resolution of 0.0625°C.
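For example, a reference reading of 21.50°C works out to 21.50 / 0.0625 = 344 in raw 12-bit counts, which can then be compared directly against the sensor output columns.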

Now that the sensors really were being exposed to the same temperature, I could use the raw and corrected residuals to prune the reference data even more, by rejecting any single-reading anomalies that made it through the first pass: (residual = reference – DS18 reading)


It’s quite an improvement to be able to use the DS18 output to polish the set because in previous tests (without the dry well) the high temperature residuals were so variable that it would have completely masked this kind of problem.

Using the groomed reference points,  it’s easy to see which sensors make the cut:

Criterion used to select sensors for DS18b20 sensor strings

With LSB points on the y-axis of those graphs, the corrected residuals for a good sensor fall into a tight band between ±0.5 LSB. The bad sensor on the right has a huge spread, and was immediately tossed in the garbage. In theory at least, I'm achieving calibration accuracy better than the sensor resolution of ±0.0625°C. It would take a bit of bravado to claim that outright, but I am confident that we reached the ±0.1°C target that I wanted for our field sensors, without doing a full laboratory-level temperature sweep.

Grouping the DS18b20 sensors into more refined categories

With only one or two exceptions, these sensors under-read. Of about 100, the largest group was (-3 to -5) LSB points with 12 sensors, followed by (-4 to -6) with 11 sensors. A small handful of sensors had consistent offsets with no slope, and only two sensors had positive slopes in their raw residuals. Overall, I rejected about 30% of them for one reason or another. To get really sweet sets, it’s a good idea to start your project with about twice as many sensors as you think you are going to need, from 3 or 4 different vendors. (though the ones from Electrodragon provided the highest yield…)

Another benefit of all this work is that I can now group my sensors using the raw residual trend-line values at both 20 & 30°C (see top left graph for #328 above). This captures both the offset and the slope information, and by matching slopes I can assemble strings of sensors whose uncorrected behavior does not diverge. With the calibration corrections already in hand this might seem unnecessary, but experience has shown that being able to quickly spot trends in the raw data can be very handy in the field when you need to make on-the-spot decisions about a deployment. I can also add that the sensors from different suppliers showed strong clustering in these groups, implying that production-run biases would introduce an offset into group normalization methods (ie: without a reference thermometer to compare them to) unless you bought them from several sources.

So our temperature chain system is maturing nicely, and in a pattern that is becoming familiar, it took three generations to reach the point where I think my DIY builds could be compared to commercial kit without looking too shabby:


As time goes on I am reducing the number of interconnects to improve reliability. But even with the chain segments sporting more sensors, I only put twenty-four nodes on a given logger body. As my wife already calls the all-white flow sensors 'storm-troopers', I added a touch of 'Vader black' to spruce up the housing. Hopefully, the force is strong in this one 🙂

Of course these guys still have to prove they can last a year under water, so for now we will continue with dual deployments (which we usually use for new prototypes) in case one drops out on us.  

I've only got solid power usage data from one of my 24-sensor chains at the moment, as everything else is still out on deployment. On those early units I had no idea what the total usage would be, so I just packed in 12 AA's (4 banks x 3 in series). After the over-voltage burn down, the discharge curve settled to a steady 100 mV drop per month. That unit had 24 sensors, taking 12-bit samples every 15 minutes. My current builds have only 2x 3AA banks, so I expect them to drop by about 200 mV per month. A low input cutoff of about 3.4v on the regulator means that I will probably get 5-6 months out of a fresh set of six AA's. Driving the DS18's at their maximum resolution (ie: 24 x 1.5mA x 750ms = ~27 mAs per sample), I think we would only make a year on 3 AA's if I moved the interval out to once per hour.
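As a rough back-of-the-envelope check on that: a 15-minute interval is 96 samples per day, so 96 x 27 mA·s ≈ 2,600 mA·s, or roughly 0.7 mAh per day just for the temperature conversions, before the logger's own overhead is added in.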

With respect to quality, the real test will come in post deployment testing when we find out how much these sensors drift over time.  And I am really keen to see how the units we deployed across that hydrogen sulfide layer performed, as that chemistry is far from benign.  If they prove hardy enough, we might even bury a few and see how that goes.  In harsh environments, every choice you make in the build is a cost/time-to-failure trade-off.

Addendum 2016-03-07

[jboyton] over at the Arduino.cc sensors forum pointed out that you could save the $200 cost of the Thermapen by using an Ultra Precision Thermistor from U.S.Sensor that offers ±0.05°C accuracy right out of the box. These 10K NTC units can be purchased from Digikey for ~$10 each. You could epoxy that sensor into the same stainless steel sleeves used on the DS18’s and be ready to go. The cool thing about this idea is that the reference readings could then be taken on a logger, avoiding those mistake-prone manual readings. Definitely going to look into this …

Addendum 2016-03-10

I’m doing post assembly water bath checks on the T-chains now. I had some lingering questions about whether epoxy contraction, or the heat I used in the process of making the string nodes, would throw off the calibration:


#74 was included in this group by accident…

You can see how closely matching the 20 & 30°C offsets groups the raw readings into a fairly tight 3-point band, even though the sensors are now distributed around the water bath. Applying the linear correction reduces that to 2 points, and shifts them all up towards the true temperature. With the spread approaching the bit-toggling level, I suspect I've reached the point of diminishing returns, so I'm calling this set ready to deploy without further normalization.

Addendum 2016-03-11

Segments 19 & 20 were constructed with some of the most stable sensors from that round of calibration, at least with regard to the corrected residuals.  But I did not have any other groups large enough to build the next few sets from DS18’s with identical performance,  so I loosened the criterion out to ±1 LSB, and started combining different offset groups on the same chain (mixing the sensors randomly so no trends would be introduced…)

The mix ended up producing graphs like this:

The raw variability now crosses 10 points, and after you apply the linear calibration you are still left with a 3-4 point spread, depending on where you are on the curve. So while the group as a whole should still be accurate, I now face a question of precision. Having those sensors flop around by as much as ±0.15°C could mask some of the more subtle signals we are looking for, and this forces me to apply a round of group normalization to get them into the kind of tight band I saw from the prime sensors:


This second y=mx+b correction is based on the average of the post-calibration readings (so it should have no effect on the accuracy), and you can still see some spaghetti squiggles from the inherent noise in these b-class sensors. I was hoping that the new method would save me from having to bang 'em together like that, but apparently it depends on how fiercely you apply your selection criteria before getting out the epoxy.

Which reminds me that automating the calibration run (ie: using a high accuracy thermistor as the reference) would let me test many more sensors and build those matched sets with less time playing button-monkey. Of course, I’d also have to ensure that I’m not introducing other errors in the process with the way I handle the thermistor readings…

Hmmmm…

Addendum 2016-06-29

Now that I have several more of these temp-chain loggers under my belt, another issue has popped up that could be taken care of easily at the calibration stage: sleep current. Sensors from some of the eBay vendors draw significantly more than the 750nA standby current listed in the spec sheet. A 'good' set of DS18B's doesn't add significantly to your logger's sleep current, but a couple of lemons in the batch can raise a 12-node segment to between 0.1-0.2 mA for the sensors alone. This might explain how these cheapies get onto the grey market in the first place.

Addendum 2016-07-03

With the next generation of temp strings in production, I am still wrestling with the “diverse” sets assembled from sensors with widely varying behaviors.  These continue to need a post-calibration normalization round to bring the group behavior together:


What's twisting my melon is that it's hard to know how much of the post-cal normalization is actually beneficial, and how much of it is just compensating for differences in the water bath, since the post-epoxy sensors are no longer pinned together by that block of aluminum…

Addendum 2016-11-16

Just a quick note to suggest another step to the overall procedure: measure the sleep current of your DS18b20! Good ones auto-sleep at about 1 µA, but I've had a few batches now that constantly draw between 40-60 µA each. This flaw was definitely vendor specific, so only buy a dozen or so from each vendor to test before buying a large batch. So far, I've had the best sleep currents with the $2.50 sensors from Electrodragon, though their "waterproof" epoxy job is utterly pathetic, and the raw 20 & 30°C offsets are often just barely within the ±0.5°C spec. I have also modified the script I use to gather the serial numbers so that it automatically sets the sensors to 12-bit every time it runs. That is the default resolution, but I found that one or two sensors from a given batch arrive set for lower bit depths. And finally, watch out for sensors that require parasite power to be turned on at the end of the conversion to operate properly, even when they are being powered by all three lines (ie: needing oneWire.write(0x44, 1); rather than oneWire.write(0x44);). I have never gotten parasite and normal mode DS18b20s to work properly as a mixed bunch. Stick with normal mode.
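For anyone who wants to fold that resolution-forcing step into their own serial-number script, a minimal sketch using the OneWire / DallasTemperature libraries might look like the following (the bus pin is a placeholder; adjust for your own wiring):

#include <OneWire.h>
#include <DallasTemperature.h>

#define ONE_WIRE_BUS 4                      // placeholder: whichever pin your 1-Wire bus hangs off
OneWire oneWire(ONE_WIRE_BUS);
DallasTemperature sensors(&oneWire);

void setup() {
  sensors.begin();
  for (uint8_t i = 0; i < sensors.getDeviceCount(); i++) {
    DeviceAddress addr;
    if (sensors.getAddress(addr, i)) {
      sensors.setResolution(addr, 12);      // force 12-bit even if the sensor shipped at 9, 10, or 11 bits
    }
  }
}

void loop() { }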

Addendum 2016-12-04

Usually I do these calibration runs with sets of 32-34 sensors, so all the holes in the aluminum block are filled. But I recently tried to calibrate a small batch of eight "naked" DS18b20s with the same method, and ended up with slope and offset calibration factors that were 1-2 degrees Celsius too low compared to my reference. I re-ran the cooling ramp with tape over the unused holes, and sure enough, the adjustments became much smaller, with the raw readings much closer to the reference. Air was getting in and cooling the center of the block where the sensors were. So if you use this method on smaller numbers, use other DS18's to plug up all the unused holes, and put tape over the used ones to prevent air circulation from depressing the readings for raw sensors. I hadn't thought about it earlier, but the sensors themselves form part of the thermal mass of the system. Once you start calibrating, you will have more than enough 'duds' (with the cables cut off) to plug up those wells when doing runs with smaller numbers of sensors.

Addendum 2017-01-18

I just stumbled across a paper at MIT that makes it pretty clear that I’ve been reproducing a standard method already on the books. I’m still digesting the information, and there is lots of stuff in there about handling sensor frequency issues, but I figure most of that has been damped out by the oversampling in the DS18, and by the epoxy around the sensor nodes.

Addendum 2017-01-24

Using shower combs to separate sensors in the calibration bath

Shower combs from the local dollar-store worked well to keep the sensors separated.

We retired a few early-generation temp strings during the last round of fieldwork, and many of those had been deployed before I had a good calibration process. But we had months of data from them, so I now had the challenge of calibrating these sensors after the strings had been assembled. With 24 sensors per chain they amounted to a pretty hefty lump, and it took some work to make sure they were all exposed to the circulating water bath properly. Some large combs provided the needed separation, and I mounted a 330 Gph circulation pump near the center of the mass.

So now I could use the water's cooling curve to do a decent normalization, but calibration against the reference was going to be tricky because of the thermal inertia of the epoxy around those sensors. I found a dead sensor from one of my early experiments, and drilled it out as a mount for the reference Thermapen probe, providing it with approximately the same amount of lag as the sensors on the strings.

The loggers recorded readings every 5 minutes over 24 hours for the group normalization (y=mx+b) coefficients. I did manual readings with the Thermapen every 30 minutes, which provided at least one reading per degree as the bath dropped from 35°C to 20°C. For that T-pen subset, I produced a second set of y=mx+b coefficients that corrected the group average used for normalization into the reference temps. So the order of operations is reversed when compared to procedures that start with calibration of the raw DS18b20's, but I'm happy to report that the process worked rather well, turning some jagged old data sets into smooth temperature profiles that compare very well to Hydrolab drop profiles from those same sites. In fact it worked so well that I think I will do it with every chain that comes home, with a careful eye on the chains that had a decent calibration as individual sensors. I've seen plenty of pressure-related problems in my other temp sensors, so I do wonder if the contraction of the epoxy is enough to hurt those initial calibrations.

Addendum 2017-04-28

While I like the $200 reference Thermapen I used for this calibration (and the NIST certificate that comes with it…), that ±0.04°C tool is probably out of reach for many DIY builders. I recently discovered the Silicon Labs Si7051 at Tindie for $9, which gives you 14-bit resolution of 0.01°C at ±0.1°C accuracy. I've been using these I2C sensors to calibrate 5% thermistors to better than ±0.2°C by baking all the system voltage errors into synthetic Steinhart-Hart constants. That's pretty good considering that the eBay thermistor and the reference thermometer together cost less than a single high quality thermistor. I still have other details to work out before I'm building temperature chains with thermistors, but I don't think I'll be able to resist that temptation for long…

Addendum 2017-09-04

Just noticed a nice DIY dry bath over at instructables.com. Though I'm not sure it's accurate enough for calibration, you could still use it on the "natural cooling curve" side like I do with my wet/dry method. I'm sure there will be more good calibration kit coming out of the bio-hacking boys soon, and they did some beautiful DS18b20 calibrations over at http://www.kandrsmith.org using the melt plateau of 99.99% gallium. And here's an interesting "exposed" epoxy mounting of DS18b20's in a liquid-cooled system.

Improving the Accuracy of 1-Wire DS18b20 Temperature Sensor Groups

Copper pipe for positioning the Thermapen

Without the dryer lint stuck in the end, the last digit on the Thermapen sometimes does not settle. I use the same thermometer for individual sensor calibration later on, but in that case it's the DS18B20 probe that moves in and out of the copper sleeve, while the Thermapen stays in the calibration bath most of the time. The somewhat annoying fold-off/on feature of the Thermapen design forces you to take it out of the bath every 10 minutes as the unit times out.

As I dive into another batch of cheap DS18B20's for a new set of temperature strings, I thought I should post a note about the pre-filtering I do with these guys before investing the effort to properly calibrate and epoxy them. After attaching crimp pins & assigning each sensor a serial number, I put several groups of them together in the water bath by binding them around a short length of 1/2″ copper tube that has a felt plug at the bottom. The copper tube lets me keep the tip of a reference Thermapen right in the center of the bunch. (The Thermapen has a resolution of 0.01°C & accuracy of ±0.04°C – so not quite the ±0.01°C accuracy I should be using for my ±0.1°C target, but more affordable at ~$200, and it has a NIST traceable calibration certificate.) I cover the whole thing with towels for insulation, turn off the bath heater, and manually take reference readings every half hour as the water cools from about 40°C to around 10°C. I ignore the first few high-temp readings as the bath convection stabilizes, and focus on the curve at my temperature band of interest, which for the next deployment will be about 24°C. On a batch of about forty sensors, you see a spread something like this:


( Note: 12-bit values on left axis = 0.0625°C/LSB.  The Thermapen reference line is the one with round markers, and has been converted to equivalent integer values)

These sensors tend to have stable offsets to within 1 LSB, so they group together naturally into 'bands' of 2pt high, 1pt high, 1pt low, etc., although they tend to toggle up or down over the length of the curve. I right-click on the lines and label the nodes with their behavior in Excel.


Once a sensor is labeled I hide it from view (by un-checking the box), so that I can click & label the remaining sensors.  Once they are all roughly classified I bring up the sensors from each designated band to check if the high/low behavior of all the units in the group is stable over a larger section of the curve:


You can see here that some of the units that were 1pt low between 350-380 change over to 1pt high down around 300. And sometimes I move the sensors into a different bucket after looking a bit more closely at their behavior in Excel.

Those are the raw 12-bit integer readings, so the graph above covers from 18.75 to 37.5°C, and over that range you typically get about half of your sensors toggling within one LSB of the reference, with a further subset of about 8-10 that lie right on the reference line. If I can live with a spread of ±2 LSB (ie: ±0.12°C), my yield from this batch of sensors goes up to 31 out of the original 43. Given that these guys cost about $2.50 each, I just toss the rest into the bin before I do any further calibration. Occasionally I get a bad batch where I triage around 50%. But the thing is, I invest so much time mounting these sensors that I would still be doing a suite of tests like this no matter where I bought them. Time is the real cost when you build a logger with so many sensors connected to it.

Addendum 2016-02-15

If you want to do more than just eyeballing those graphs, a simple first pass is to subtract the RAW sensor reading from the 12-bit equivalent of your reference reading. Then take the average of those residuals as your bin category (excluding high temperature outliers). This is still a pretty broad-brush approximation of your sensor behavior though.

Of course once you get that far, you might as well plot the reference temps (Y axis) and the raw data (X axis) together and use Excel’s trend-line function to give you a correction equation that converts the sensor output into the reference:

Binning DS18b20 temperature sensors by checking residuals

Note: The spreadsheet above is in sensor-output-equivalent numbers rather than Celsius or Fahrenheit. It's just faster to translate the reference data into sensor equivalent units, rather than change all the other numbers in the spreadsheet into standard temperatures. But the units you decide to use do not change the method, just the constants you get for your trend-line equation.

Since the 20-30°C range is usually the flattest part of the DS18b20 response curve, a linear trend-line usually brings my average residuals below the sensor's LSB resolution of 0.0625°C. In essence, this is a simple multi-point calibration method that you could do with any reference thermometer and an insulated bucket of warm water.
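As a purely illustrative example (these constants are hypothetical, not from my data): if the trend-line comes out as y = 1.004x - 2.1, then a raw reading of 380 corrects to 1.004 x 380 - 2.1 ≈ 379.4 counts, or about 23.7°C once you multiply by 0.0625.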

If you have to go to a polynomial to get your residuals down in this temp range, you probably have a sensor that is going to be a pain in the backside to deal with.  I always have bigger residuals at 30-40°C than I do at 20-30°C but that is also the part of the curve where the bath is cooling more quickly, so I suspect that I have hot and cold spots cropping up in the physical system.  I’m now looking at adding a circulation pump, and perhaps making a dry well insert to put in the center of the bath.

This cooling ‘temperature ramp’ procedure is much different from the two point calibrations I was doing earlier, and I am still trying to determine which one gives better results when I combine it with the normalization that I usually apply to these sensor strings after they are epoxied into permanent sets.  So far, the actual measured ice point offsets have not been agreeing very well with the intercepts predicted by regression on data from 20-30°C and that has me wondering whether it’s even worth the effort to calibrate these sensors over a temperature range that they won’t see in the field.

DS18b20 normalisation bath

I insulate the heck out of the thing with towels, etc. so the normalization cooling takes at least 24 hours (usually closer to 2 days). I still take occasional Thermapen readings, but only to compare with the resulting average that I generate with the sensor data.  Ideally, the corrected average should already have very good agreement with the reference readings… or something has gone wrong with my procedure…

For the 'normalization', I repeat the cooling water bath procedure above with the completed temperature chains assembled as they will be for a deployment. But instead of the Thermapen readings, I use the corrected average (using the equations from above) of all the sensor readings on the Y axis to generate a second correction equation, which is applied to each sensor's raw data to normalize them to each other. Unlike Craig at Yosemite Foothills, I use a second-order polynomial for this step. After that second correction is applied, any differences I see between sensors on the same chain should be due only to real differences in temperature.

Addendum 2016-02-16

Well, it serves me right for counting my chickens. After a few more days putting these sensors through their paces, 11 more units have died, or they have stopped reading temperatures below 15°C. I've been drying them on the radiators each night, and though we have a hot water system, I suspect the combination of daily soaking and night-time roasting loosened the seals between the metal caps and the epoxy, or did some other kind of damage. So now this latest batch is falling towards a 50% yield of sensors worth putting into the chain. Looking on the bright side, perhaps selecting sensors that can take more abuse is a good idea in the long run, even if it was done accidentally.

Addendum 2016-02-29


An air filled unit from eBay vendor nyplatform. Looking on the bright side, it’s very easy to extract the raw sensor if you wanted to.

After having a few more units die in the middle of testing, it started to really get under my skin. So I decided to take a bunch of these sensors apart, and soon discovered that many of them were poorly put together (no big surprise there…). I had assumed that the metal casings were completely filled with epoxy, but many were sealed only with a bit of heat-shrink tubing. So the thermal cycling I put them through on the radiators loosened it enough for them to soak through on their next dunking. Even when they had adhesive (like the unit shown on the right), they sometimes exhibited other strange problems as time went on, such as not reading temperatures below 10°C, so I have to wonder if the solder joints are also suffering from thermal expansion problems. Although the nyplatform sensors were the cleanest looking units inside, they suffered the highest failure rate, and had the largest raw residuals, often outside the ±0.5°C spec. So just being clean does not necessarily mean best quality…

Things got even uglier as I moved on to the other vendors:


That just can’t be good…

Though the units from Electrodragon had at least some epoxy inside, they were still easy to simply pull away from the stainless sheath, and when I did that I discovered some nasty smelling chemistry going on in there.  The frustrating thing about this is that during calibration runs these sensors were by far the best performers in terms of offsets & accuracy. But after seeing inside I have to wonder how long they are going to last.  I have started pulling all of these units out of the metal and cleaning them with 90% IPA so that I can mount them directly inside the epoxy. Hopefully this treatment will halt that creeping decay and keep them running longer than they would have if I just left them as they were.

DS18b20 temp sensor embedded in E30CL epoxy.

DS18b20 after cleaning & embedding in epoxy.

So I guess the take home of all this is that these cheap sensors should only be considered notionally waterproof in the same way that the round red things you buy at the supermarket in the middle of February are notionally tomatoes. Of course, once I strip them from the casing, they could be subjected to more pressure at depth, which had bad effects on my other temp sensors…Argh!

Addendum 2016-07-27

WordPress does not let me move comments from one post to another, so I am transposing this reader question over as an addendum:

João Farinha asks:
Great Work, I’ve a question if you don’t mind. The length of probes DS18B20 are small and several times I had to make extensions, used single-wire cable (Ethernet cable), sometimes the probes stop to work as the union between cables is made with solder, did this ever happened to you? if I just wrap the wire to the other and put duct tape around works well but is not as solid.
Cheers

I solder everything, and so far I have not had any problems with signal bounce on chains up to 25 m in length. But I do not use Ethernet cable, I use M12C Series Extension Cables from Omega.com  as these are rugged enough for my underwater application.   However I should mention that I have ruined several sensors in the past by having my soldering iron too hot when I tried to put the jumpers on the legs of naked DS18b20’s.  I simply cooked them by taking too long.  Wire wrapping could work if your leads were clean, but it takes practice, and a good tool for the job. 

Generally I buy the waterproof units that already have 1-2m of wire connected (as pictured in the post above), since I need at least short-term waterproof capability to run the sensors through the overall calibration procedure before soldering them into a chain.

Field Report 2015-12-15: New DS18B20 1-Wire Sensor Chains Installed

A typical DS18b20 temperature string deployment


The August deployment produced both success and failure from our 1st-generation temperature strings. We still managed to get both of those older units back on their feet with fresh batteries, and we added two beta units to the set. I really had to scramble to get the new chains ready in time, because we are spending more time on testing and calibration as I try to squeeze the best possible performance out of these humble DS18b20's. It takes me about a day to solder and epoxy a section with 8-12 sensors, so these instruments also represent a significant amount of build time. (Note: the length of the wires in between nodes does not affect that time very much.)

Although both of the alpha loggers passed the overnight tests following their first run, the shorter (25cm) chain developed a reading problem as soon as it was powered up. The fact that this error occurred before the unit went near the water tells me that it was either a sensor failure, or a problem with the connectors. I have been using Deans 1241 micro connectors between the segments because they seem really robust, but my gut tells me those break points could also add some signal reflection problems.


We went hunting for potential deployment locations by dangling a 24m chain from a life jacket and moving it around the cenotes to generate profiles. I think I will put a display screen on one of the next units to make these ‘prospecting’ trips easier in the future.

The logger itself ran for the duration, but the log data was a string of the dreaded 85°C (ie: 1360) and '-1' read errors. Since these numbers are fairly distinctive, I will put an error check in the code at startup to see if I can intercept this kind of problem in the future. At least the pressure record from the MS5803 on the housing survived intact, and that sensor seems to be working again now that I have removed the Qsil silicone coating that I had over the top of the sensor on the previous deployments.
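That check doesn't need to be elaborate. Something along these lines (a hedged sketch, assuming the raw 12-bit integers are what gets written to the log) would be enough to flag a dead bus before the unit goes in the water:

// 1360 raw counts = 85.0°C, the DS18B20 power-on default; -1 is an all-ones (failed) bus read
bool looksLikeBusFault(int rawReading) {
  return (rawReading == 1360) || (rawReading == -1);
}
// e.g. at startup: read every node once and refuse to begin logging if any of them return a fault value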

I isolated the read fault to the first segment of the temperature sensor chain, and when that section was removed the rest of the sensors ran well enough. We decided to re-deploy the parts that were still running (although the chain is now less than four meters long), and I brought the dodgy section home for some forensic testing. I am suspicious of the U-09LV urethane that I used on a few of the nodes, thinking that its higher moisture resistance might not compensate for the stiffness and overall durability of E-30CL.

Fortunately, the longer chain that we deployed in the deeper inland site performed well, giving us another record with sensors spanning the halocline:

Two months of raw DS18b20 output, Logger 46  (10m cable with 20 sensors). Note: the warmer temps shown at the top of this graph are from sensors deeper in the water column, while the cooler temperatures are from shallow sensors in fresh water

Even with relatively long 50 cm spacing, the large rain events of the season pushed the fresh/salt boundary around so much that several sensors (indicated here with 48pt moving averages) switched from the saline to the fresh water, and then back again. It will be interesting to see if those bands tighten up, or spread out, after we apply our normalization factors.

New DS18b20 Temp strings ready to deploy.

This 6m x 24node chain has a 10m extension, allowing us to change sensor positions in the water column by simply tying off the excess.

After several meetings to obtain permission from the landowners, we managed to install our new set of DS18b20 temperature strings. We decided to co-deploy a combination of high and low spatial resolution chains so that we still have a good chance of getting data if one logger dies. Due to memory limitations, etc., I built them with twenty-four sensor nodes per logger, and even with those spread out over 24m of cable, 3k3 pullup resistors are enough for the 1-Wire communications. That's aggressive enough to give me some concern about self-heating if I was doing multiple readings, but I figure that with the bus at 3.3v it probably just comes out in the wash.

This deployment site had significant amounts of hydrogen sulfide at depth which forms a visible layer that is shown well by these photos from Angelita.  It will be interesting to see how the chemistry affects our sensors. It certainly had an effect on me, as I was a little worse for wear after that dive.

Field Report 2015-08-12: Success with DS18B20 1-Wire Temperature Chains!

OMG! It worked. Woot!

Yep, that’s me grinning like a fool. It might not be an X-prize, but it worked! IT WORKED!

We were already happy with the flow & drip sensor data from this trip because, despite the TMP102 problems, most of the units had performed brilliantly. But for me, the real prize of the season was going to come from the underwater temperature strings that began their first 'real-world' trials on the last trip. Because the cave they were in presented some challenges, we didn't pull those units until we had a few good dives under our belts. Back in March, I had just changed over to slimmer builds in 2″ PVC pipe, and the flow meters at this site were the deepest deployment so far for those new housings. So every unit in this cave was testing something important, and I hoped to take plenty of photos despite the fact that we were right at our little camera's limit.

Trish had line duty (for this part of our dive), and she captured a reasonable shot of me inspecting the first flow meter from there. Though I had only been at the sensor for a few moments, you can already see a ball of dust starting to form overhead:

After that we installed a new ceiling anchor, with a descender rod to put a Pearl in the deeper saline flow.  Unfortunately, all that faffing perc’d out the site, so I only managed a tiny clip of one temperature string in-situ before the cloud of pea soup drifted over:

That logger supported a fairly short 5m sensor chain, with 19 nodes spaced at 25cm. It was installed across the halocline, and I admit that I was concerned that (with a minimum increment of 0.0625°C) those humble DS18B20's would not have enough resolution to track the fresh/salt water interface. In addition, I had assembled the string in segments that were linked by my new DIY underwater connectors, so these builds had more potential failure points than I even wanted to think about. It's probably a good thing that our dive schedule was so full that I didn't have time to look for LED heartbeat pips while we were still under water.

Following that long dive, we had a bumpy crawl back to the main road which put more than a few new scratches on the rental car. After one bone-shaker, Trish observed a logger going through its startup sequence on the indicator LED. A power blip like that during an SD write could toast the card, destroying all our data! I asked her to cradle the new babies till we got back to the paved highway. When we reached Tulum, we returned our tanks, stowed the gear, and bolted down a couple of tacos in record time. I might even have exceeded the speed limit a bit on the way back to the room…

But after some tough dives, and months of waiting, this was the result from #045:

Cave Pearl data loggers - DS18b20 Temperature string
Note: I inverted the temperature axis (left side) to match the physical situation: the saline water was warmer at depth, with a cooler fresh cap layer. The black traces are 96 point (1 day) moving averages.

The deeper saline water was a full degree warmer than the shallow fresh water, giving plenty of spread for the DS18's. And with 25cm spacing we managed to plant one sensor right in the middle of the halocline, capturing its cycles of expansion and shearing away. And that's just the raw data! Even without the calibration corrections it was easy to see that we nailed it. Unit #046 gave an equally complete log, but with its larger 50cm spacing the sensors straddled that fresh/salt boundary, so we simply have an empty gap on the plot. Of course, to a karst hydrologist, knowing the limits of the mixing zone is also useful information.

After the initial excitement over that temperature data died down, I proceeded through my usual set of post deployment checks. I was keen to compare the power curves from the two loggers we had deployed:

TempLoggerPowerCurves

I knew those sensors were going to pull a substantial amount of juice during 12-bit conversions, but putting twelve AA batteries in the housing was still something of a shot in the dark for me back in March. While both curves looked smooth, there was something odd about #046 using less power than #045, because it had one more sensor (20 in total) stretched out over 10 meters of cable, and #046 also had a more aggressive pull-up resistor on the bus. A bit more poking around and I noticed that I had accidentally reversed a cell in one of the banks (those with sharp eyes probably spotted that in the photo above), so #045 had actually run on only three sets of AA batteries. The Schottky diodes isolate each bank against battery failures, but it's nice to know that they also protect the little loggers from my own dumb mistakes. With #046's full complement of 4x3AA batteries only losing 0.5v over almost four months, I'm confident these loggers could approach my one-year operating target on a fresh set.

The marine heat shrink tubing adhesive after four months

After four months under water the adhesive on that marine-grade heat shrink looked a bit flaky, but there is E-30CL epoxy potting the wires inside the adapter, so I'm not worried about the seal.

Of course that's predicated on everything staying water-tight, so I examined each temperature sensor very carefully, looking for evidence of water damage. Most of the nodes were filled with hard epoxy (E-30CL), and for some I was trying out a more flexible urethane (U-09LV). They were all remarkably clean, with only one node showing yellowing, and that one was a botch where I had split the original sheathing under the heat gun and had to re-wrap it. I was also pleased to see that my DIY underwater connectors proved to be robust. They were all bone dry inside, with no hint of oxidation on the contacts. It was looking like we would be able to re-deploy these units right away!

While I was examining the hardware, Trish had been chewing on the data from both of the loggers. At one point she started making funny “Hmmm…” noises which I know she only makes when she disagrees with something, but is being too polite to say so.  (You hear that kind of thing a lot at academic parties…)  When I asked her what was wrong, she showed me the pressure log from one of the MS5803’s:

045_PressureLog

Damn! Even with surface barometric corrections it was obvious that the rising trend in this record was out of sync with our other water level recorders.  And with 1 millibar of pressure being approximately equivalent to 1 cm of water depth, the implied 4m delta is simply ridiculous. I immediately suspected that the Qsil 216 I had put over the pressure sensor was doing something weird. I’ll need to do some homework to sort out what actually happened, but I’m guessing the silicone started absorbing moisture at depth since the stable cave environment is unlikely to cause problems from thermal expansion.

#046: ready to roll the next morning.

So we didn't get a hat-trick on these first builds, but by 2 am I had new batteries in place, the clocks updated, and the silicone carefully peeled away from the MS5803s (hopefully without damaging the factory gel caps). If the overnight run looked good, we hoped to install #046 in a deeper cave (~24m) the next day.

Calibrating DS18B20 1-Wire Sensors with Ice & Steam point measurement

Give each sensor a serial number, but don’t label the sensor itself as I have in this photo or the labels will just fall off during the steam point testing. After adding crimp pins to the wire ends it becomes easy to gang them together on a breadboard for testing. Despite Maxim’s warnings, I had star configurations above 20 sensors reading well with them close together like this.

I’m probably not the first person to note that sensor calibration is one of the big differences between the mountains of data coming from the citizen science movement and that produced by research professionals. (…mea culpa…) After opening this can of worms, I think I am beginning to understand why: Accuracy calibration rapidly gets complicated, or expensive, and often it’s both at the same time. By the time you have what you need to do the job, the difference between a $0.30 sensor, and a $30 sensor, is pretty insignificant. So it’s no surprise that few people work on calibration methods for low cost sensors, or why normalization approaches are used instead.

But I am already spending far too much on this little hobby, so despite knowing that the folks over at Leighton Telescope managed to get their DS18b20’s to about ±0.01°C with a NIST traceable thermometer, I thought I would see how far I could get on my own.  I suppose if I was an alpha geek,  I would make my own platinum RTD  and calibrate the sensors against that.  But I’m not quite there yet. I should also point out that numbers are not my strong suit, so there could be some significant errors in what I have cobbled together here and I appreciate any feedback to help correct those…

The first thing that occurs to me is: Can you read the temperature more accurately by averaging a bunch of these sensors together? If the readings from the sensors cluster around a mean with some standard deviation, then the standard error of that averaged reading should shrink (roughly as 1/√N) as the number of sensors increases…right? The data sheet gives you a sense of how far you can get with that approach, because I assume that Maxim/Dallas used a very large number of sensors to derive their typical performance curve:

DS18B20_TypicalPerformanceCurve

But if I understand what people say about this graph, the only reason the 3-sigma spread looks better than ±0.5°C at 20°C is that the errors in the sensors used to derive that curve were truly random, with a nice Gaussian distribution around the mean. However, since the actual batch of sensors I am holding in my hand is likely from the same production run, it is subject to systematic errors that don't cancel each other out so nicely. And since I bought them on eBay, there is also a chance that they might be fake DS18b20's. So I have no idea how my mean error line is related to the one on Maxim's graph.

But there are still useful things you can do with this kind of averaging:

The front temperature display on this clunky old Fisher Scientific bath was off by more than 2°C, and it was missing a foot. While it's hardly a temperature chamber, the insulation and covering lid produced a slow cooling curve, so I could be reasonably confident the sensors were being exposed to the same temperatures. Don't use data from the rapid heating cycle, because temperatures are likely to be unevenly distributed in the bath.

First of all, you can get rid of the bad sensors by selecting a group that has consistent behavior over the temperature range you are looking at, with readings that fall within the manufacturer's specifications. To get enough data for this kind of assessment, I needed to run at least 10 sensors at the same time so that the average had some statistical weight. For this testing I picked up an old five-litre Isotemp bath (you can find them for $25-$50 on eBay), but you could just as easily do this with hot water in a styrofoam cooler. With about 20 sensors on a breadboard in a star configuration (4.7k pullup), I brought the water bath up to a stable 40°C, and then moved the entire thing into the fridge and left it logging during the cool-down. The lid was on, and I had several towels over top to make the process go as slowly as possible. It took 12 to 24 hours for each batch of sensors to reach ~5°C.

With this data in hand, I looked at the residuals by subtracting each sensor's raw reading from the average of all the sensor readings. This exercise sent one DS18 straight into the bin, as it was more than 2.5°C away from the rest of the herd for its entire record. Another was triaged due to a strange "hockey stick" bend in its residual around 25°C. I threw out the data from those two duds and recalculated the average & residuals again. Just to be on the safe side, I decided not to epoxy any sensors into a long chain if they were more than 0.3°C away from the average. (although I am still wondering whether eyeballing residuals like this is enough to exclude the right outliers?)
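
If you would rather do that screening in code than in a spreadsheet, the gist of it looks something like the sketch below. The array sizes, names, and the 0.3°C cutoff are just examples of mine, and the readings table would be filled from the logged cool-down data on the desktop side:

#include <math.h>   // for fabs()

const int NUM_SENSORS  = 20;
const int NUM_READINGS = 500;                 // one row per time step in the cool-down
float readings[NUM_READINGS][NUM_SENSORS];    // °C, already converted from raw counts
const float REJECT_LIMIT = 0.3;               // max °C distance from the cohort average

bool sensorPassesScreening(int s) {
  float worstResidual = 0.0;
  for (int t = 0; t < NUM_READINGS; t++) {
    float sum = 0.0;
    for (int i = 0; i < NUM_SENSORS; i++) { sum += readings[t][i]; }
    float average  = sum / NUM_SENSORS;
    float residual = fabs(readings[t][s] - average);        // this sensor vs. the herd
    if (residual > worstResidual) { worstResidual = residual; }
  }
  return (worstResidual <= REJECT_LIMIT);
}

// after tossing the obvious duds, rebuild readings[][] without them and re-run the check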

You can then normalize the sensors to each other by fitting a quadratic equation to a graph of each sensor against the overall average line. Excel can generate these coefficients for you with the LINEST function, or it can solve the quadratic with Goal Seek. But the easiest method I found was to make a 2nd order (but not higher) fit with the chart tool's trendline function. Make a scatter plot of the data with the averages on the Y axis and data from one individual sensor on the X axis. Then right click on the data points to select them and choose ( Add Trendline ) from the pull down menu, with the [ ] Display equation on chart tick box checked. (here is an example of the technique using an older version of Excel)

The equation you see displayed will convert that particular sensor's output into corresponding temperatures on the average line. With this transformation, each sensor will yield the same reading if it is in the same thermal environment, and you can accept that any differences between two sensors in the chain represent real differences in temperature.
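
Once Excel hands you those trendline coefficients, applying them is trivial. Here is a sketch of what that looks like in code; the struct, the function name, and the coefficients in the example are all made up for illustration:

struct QuadCal { float a; float b; float c; };      // Tavg = a*x*x + b*x + c, from the trendline

float normalizeToAverage(float rawTempC, const QuadCal &cal) {
  return (cal.a * rawTempC * rawTempC) + (cal.b * rawTempC) + cal.c;
}

// hypothetical example: if the fit for one sensor came out as y = -0.0002x^2 + 1.0113x - 0.084
// QuadCal sensor07 = { -0.0002, 1.0113, -0.084 };
// float corrected = normalizeToAverage(23.31, sensor07);   // that sensor mapped onto the average line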

This kind of normalization is as far as most people go. However for the reasons I outlined above, we can’t be sure that we were using a valid sample for that mean data. In my tests it looked like I did not have an equal distribution of sensors above and below the average line, so I still didn’t really have a handle on whether this was improving the absolute accuracy. (I will post some example graphs of this later…)

That brought me to calibrating the DS18b20's against intrinsic physical standards, which rely on the fact that during a phase change (melting, freezing or boiling) adding or removing heat causes no change in temperature. In fact, those heating-curve plateaus are known so precisely that NIST uses them to calibrate the expensive thermometers that I am trying to avoid buying. Today they do this with Gallium's triple point (29.7666 °C) and the triple point of water (0.010 °C), but they used to use Gallium's melting point plateau (29.7646 °C). Gallium sells for less than a buck a gram on Amazon, and a density of about six grams per cubic centimeter means a block big enough to surround one of the DS18's is almost within a DIY'er's budget. (100 grams will make a disk about two inches across and a quarter inch thick) But considering that commercial Ga melt cells cost about three grand, either that stuff is nasty enough to get me into trouble, or you need a lot more of it, at higher purities than you can buy on eBay, to build one. Then there is the significant time it would take to refreeze the block again for every single sensor. And finally, all exposed metal must be carefully lacquered, as Gallium will form an amalgam with many metals, and any dissolved metals will compromise the purity of the bath, shifting the melting point. And you would probably have to cover everything with an argon wine preserver.

So I went hunting for other substances I could use for a mid range calibration point and found several good boiling points such as: Ether (35 °C), Pentane (36.1°C), Acetone (56 °C), and Methanol (66 °C). Despite my enthusiasm over coffee the next morning,  all of them were summarily rejected by my wife, who strongly suggested that I look for calibration procedures that do not create large amounts of highly explosive vapor. Given how unstoppable she usually is in the pursuit of  good data, I was not expecting this outburst of common sense 🙂

So I looked at the other primary standard used to calibrate pt100's. It turns out it is possible to make your own triple point cell, and if that's not good enough for you, Mr. Schmermund also produced plans for a freezing-point-of-mercury cell (−38.8 °C) (See: "Calibrating with Cold", Shawn Carlson, Scientific American, Dec. 2000 issue). However, the local 7-11 was fresh out of liquid nitrogen when I checked, and I had a gut feeling that risking mercury-induced brain damage was not going to pass the cost/benefit analysis either. If I actually did need sub-zero calibration, I think I would try using Galinstan (−19 °C), which is now replacing mercury in glass thermometers.

Pre-chilling the sensors in one corner of the bath makes the process much faster. Hold the sensors by the cable, not the metal sheath, or heat from your hands will affect the readings.

It was looking like calibrating against anything other than distilled water was going to take a substantial amount of effort compared to what I was seeing in the NIST and EPA videos. Most sources indicated that the ice and steam points were at least an order of magnitude more accurate than my ±0.1°C target, making them suitable for the exercise.

While the overall procedure is pretty easy, it did help to practice a few times to get a sense of when I could trust the readings. Checking that you have just the right amount of water in your ice bath makes a big difference, and don't run the sensors at full tilt or they will self-heat (I left 15 seconds between readings). Since errors on my part would cause the sensor to read warmer than the true ice point, I took the lowest reading, while stirring, as my final reading. The difference between stirring and not stirring was usually 1-2 integer points on the sensor's raw output (0.0625-0.125°C), and this was consistent for all the sensors.
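
For reference, the ice-point read loop is about as simple as Arduino code gets. This is just a sketch of the idea using the OneWire/DallasTemperature library pairing (the pin number and serial speed are arbitrary choices of mine); it reads one sensor every 15 seconds and keeps track of the lowest value seen:

#include <OneWire.h>
#include <DallasTemperature.h>

OneWire oneWire(8);                        // DS18B20 data line on D8, with its pullup
DallasTemperature sensors(&oneWire);
float lowestC = 999.0;                     // lowest reading seen so far

void setup() {
  Serial.begin(9600);
  sensors.begin();
  sensors.setResolution(12);               // 0.0625°C steps
}

void loop() {
  sensors.requestTemperatures();           // ~750 ms conversion at 12 bits
  float t = sensors.getTempCByIndex(0);    // first sensor found on the bus
  if (t > -100.0 && t < lowestC) { lowestC = t; }   // skip the -127 'disconnected' errors
  Serial.print(t, 4); Serial.print("   lowest so far: "); Serial.println(lowestC, 4);
  delay(15000);                            // 15 s between readings to avoid self-heating
}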

If these sensors were linear, then reading the ice point would be a direct measure of the b in y=mx+b. And this got me wondering if a one-point calibration was enough all by itself. But once again my wet blanket science adviser assured me that nothing on those graphs told me if the offset was constant over the sensor's range. Hrmph! (Although according to ThermoWorks, the ice point alone can be a good way to check for drift, because the most common error in electronic temperature sensors is a shift in the base electrical value.)

I found a silicone vegetable steamer lid for the calibration that had three DS18B20 sized holes in it already.  Getting the right pace for your slow rolling boil is important, and this lid sheds the condensed water back into the pot reasonably well. Alligator clips also help speed the process.

So I moved on to measuring the steam point. Water's boiling point is not necessarily 100°C, and the variation is driven almost entirely by atmospheric pressure. Altitude is often used as a stand-in when pressure information is not directly available, and there are plenty of places to look up elevation and barometric pressure data (& converters) for the necessary corrections.

I already had some MS5805-02 sensors on hand, so with the help of Luke Miller's library I could read my local atmospheric pressure for the correction directly. The accuracy of my pressure sensor is ±2.5 mbar (similar to the more common BMP180), with the boiling point adjustment being: Corrected B.Pt. = 100°C + ((PressureReading − 1013.25 mbar) / 30). So the 5 mbar total error range of the pressure sensor could shift the adjusted boiling point by up to 0.166°C. This means that the error in my pressure measurement is at least as significant as the other aspects of this procedure. That is better than the default ±0.5°C, but it puts a limit on how accurate I can get with my steam point measurement.
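
That adjustment is a one-liner, so for completeness here it is as a function; the 990 mbar in the example is just a number I picked to show the arithmetic:

// Corrected boiling point from local barometric pressure (in mbar),
// using the same (P - 1013.25)/30 adjustment described above.
float correctedBoilingPointC(float pressure_mbar) {
  return 100.0 + ((pressure_mbar - 1013.25) / 30.0);
}

// e.g. if the pressure sensor reports 990 mbar:  100 + (990 - 1013.25)/30  =  99.225 °C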

Cutting down the stacks on the Fred steamer lid allowed me to do multiple sensors at once. This saves time, but be careful or you will pay the piper with a few burned fingers when you change them out.

Each sensor took about 5 minutes to warm up to reading temperature with the water at a slow to medium boil (and it was easy to see that on the serial monitor). I didn't consider the test done until I saw at least a full minute of stable output (reading the sensor every 10 seconds). Since errors in my technique would produce readings on the cold side, I took the highest 'frequently repeated' number as the final reading. Most sensors settled nicely, while some of them toggled back and forth by one integer point from one reading to the next.

In comparison to the steam point procedure, I trust the ice point as more reproducible because it does not depend on any pressure information. Perhaps more important is the fact that 100°C is far away from my 20-30°C target range, leaving the possibility of significant errors if the sensors have a non-linearity problem.

With the ice and steam readings in hand, I could construct a two-point calibration for each of my DS18B20's with slope M=Δy/Δx and B=(the ice point reading). (explained here, and if that left you in the dust there are lots of fill-in-the-blank spreadsheet templates on the web)
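
In code form, the correction being applied looks something like this; the function and variable names are mine, and the numbers in the example are invented just to show the arithmetic:

// Two-point (ice & steam) calibration:
//   rawIce   = what this sensor read in the ice bath  (should have been 0°C)
//   rawSteam = what this sensor read in the steam     (should have been the corrected boiling point)
float twoPointCorrect(float raw, float rawIce, float rawSteam, float correctedBoilPtC) {
  float slope = (correctedBoilPtC - 0.0) / (rawSteam - rawIce);   // M = Δy/Δx
  return (raw - rawIce) * slope;                                  // forces the ice reading onto 0°C
}

// e.g. a sensor reading 0.19°C in ice and 99.50°C in steam (corrected boiling point 99.23°C)
// maps a raw 25.00°C reading to (25.00 - 0.19) * 99.23/(99.50 - 0.19) = 24.79°C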

At this point I am still doing tests & chewing on numbers, but the standard deviations around the mean line are being reduced by this ice & steam point calibration. The problem is that even after I apply the resulting slope and intercept, I still have significant residuals from a mean that is derived from the corrected numbers. I thought that the two-point calibration would make the graphs of the individual sensors line up very closely with one another, and that they would have nearly identical slopes(?) I am left wondering if larger sensor errors up at 100°C mean that I need to apply some additional process to normalize my sensors to each other in the 30°C range after doing the two-point calibration. But using the process I described above would generate 'b' value corrections, and I am very reluctant to modify my y-intercept numbers because I think using the ice point to measure that offset is robust. These doubts about the accuracy of the steam point, the sensors' linearity, and my lack of a nice "mid-range" standard to calibrate against, have me hunting for a method that would gracefully combine a single (ice) point calibration with normalization. And the dip in the datasheet's mean error curve between 20-30°C implies that even after applying ice point corrections, my average line will still be about 0.05°C lower than actual(?)

Another important observation is that the means generated from the uncorrected data were within 0.14-0.16°C of the means calculated after applying the two point calibration. Either my sensors actually did have a reasonably normal distribution of error, or I might have missed something important.  The implication in that first case is that normalization alone should improve your overall accuracy, but I still need to get my hands on a calibrated pt100 to know for sure….Argh!

Addendum 2015-05-18

Bill Earl just posted a beautifully written article on sensor calibration, which puts everything here into context. A great job once again by the folks over at Adafruit!

Addendum 2016-02-12

Just adding a quick link to a small post on the pre-filtering I do with these sensors, which I only posted because no one else seems to bother posting data on the 'typical quality' you see with the cheap eBay sensors. And after splashing out on a Thermapen reference thermometer ($200), I can try a multi-point calibration for these sensors that is closer to my target temperature range.

Addendum 2016-03-05

Just put the finishing touches on a new calibration approach which, compared to this ice & steam point method, was an order of magnitude faster to do. If you are calibrating a large number of sensors, the reference thermometer is definitely worth the investment.