I’m developing a family of environmental monitors for use in caves and underwater, but the basic three component logger platform will support a wide range of different sensors.
We’ve been using a reverse bias discharge technique to turn the indicator LEDs on our loggers into light (& temperature) sensors for several years. The starter code on GitHub demonstrates the basic method, but as the efficiency of garden variety RGBs continues to improve, I’ve noticed that the new ‘super-brights’ also seem to photo-discharge more rapidly than older LEDs. Sometimes the red channel discharges so quickly that we hit the limit of that simple loop-counting method with our 8 MHz processors.
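For anyone who hasn’t seen the starter code, the loop-counting read boils down to something like this (a minimal sketch with hypothetical pin assignments, not the exact code from the repo):

#define ANODE_PIN 4    // hypothetical wiring: LED anode
#define CATHODE_PIN 5  // hypothetical wiring: LED cathode

uint32_t readLED_loopCount() {
  pinMode(ANODE_PIN, OUTPUT);   digitalWrite(ANODE_PIN, LOW);
  pinMode(CATHODE_PIN, OUTPUT); digitalWrite(CATHODE_PIN, HIGH); // reverse bias charges the LED's junction capacitance
  delayMicroseconds(24);
  pinMode(CATHODE_PIN, INPUT);  // float the cathode: photocurrent now drains the stored charge
  uint32_t loopCount = 0;
  while (digitalRead(CATHODE_PIN) && (loopCount < 4000000UL)) { loopCount++; }
  return loopCount;             // brighter light = faster discharge = fewer counts
}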
Normally when I want more precise timing, I use the Input Capture Unit (ICU), which takes a snapshot of the Timer1 register the moment an event occurs. This is now our preferred way to read thermistors, but that means on most of our deployed loggers the ICU on D8 is already spoken for. And a multi-color LED offers interesting ratio-metric possibilities if you measure each channel separately. That prompted me to look into PIN CHANGE interrupts, and I’m happy to report that, with a few tweaks to suspend other interrupt sources, Pin Change & Timer1 can approach the limits of your system clock. Results are on par with the ICU, but pin change interrupts extend that ability to every I/O line. With slow sensors, where the counts are high, I usually put the system into sleep mode IDLE to save battery power while counting, but that adds another 6-8 clock cycles of jitter. Sleep modes like power_down are not used because the ~16,000 clock cycles that the processor waits for oscillator stabilization after deep sleeps make accurate timing impossible.
If you are new to interrupts then Nick Gammon’s interrupt page is definitely the place to start. (seriously, read that first, then come back & continue here…) The thing that makes working with interrupts complicated is that microcontrollers are cobbled together from pre-existing chips, and then wires are routed inside the package to connect the various ‘functional parts’ to each other and to leads outside the black epoxy brick. Each ‘internal peripheral’ uses a memory register to control whether it is connected (1) or not (0), and several sub-systems are usually connected to the same physical wires. Each of those ‘control bits’ has a name which is completely unrelated to the pin labels you see on the Arduino. So you end up with a confusing situation where a given I/O line is referenced with ‘named bits’ in the GPIO register, other ‘named bits’ in the interrupt peripheral register, and yet more ‘named bits’ in the ADC register, etc. Pin maps try to make it clear what’s connected where, but even with those in hand it always takes a couple of hours of noodling to get the details right. I’m not going to delve into that or this post would scroll on forever, but there are good refs out there to Googlize.
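To make that concrete with the pin used later in this post: on a 328p-based ProMini the single physical pin that Arduino calls D5 answers to a different name in every register that touches it (pulled from the ATmega328p pin map):

// one physical pin, many names (ATmega328p 'D5'):
//   PD5     in the GPIO registers (DDRD / PORTD / PIND)
//   PCINT21 in the pin change mask register (PCMSK2)
//   T1      at the Timer1 external clock input
//   OC0B    at the Timer0 PWM output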
Fast Reading of LED light sensors:
#include <avr/power.h>
#include <avr/sleep.h>   // for set_sleep_mode() / sleep_cpu()
#include <util/delay.h>  // for _delay_us()
#define RED_PIN 4        // my typical indicator LED connections
#define GREEN_PIN 6
#define BLUE_PIN 7
#define LED_GROUND_PIN 5 // common cathode on D5
volatile unsigned long timer1overflowCount;
// Reading the red channel as a stand alone function:
uint32_t readRedPinDischarge_Timer1() {
// discharge ALL channels by lighting them briefly before the reading
digitalWrite(LED_GROUND_PIN,LOW); pinMode(LED_GROUND_PIN,OUTPUT);
pinMode(BLUE_PIN,INPUT_PULLUP); pinMode(GREEN_PIN,INPUT_PULLUP); pinMode(RED_PIN,INPUT_PULLUP);

// execution time here also serves as the LED discharge time
byte gndPin = (1 << LED_GROUND_PIN);
byte keep_ADCSRA = ADCSRA; ADCSRA = 0; // stop the ADC
byte keep_SPCR = SPCR;
power_all_disable(); // stops ALL timers, saves power and reduces spurious interrupts
bitSet(ACSR,ACD);    // disables the analog comparator

digitalWrite(BLUE_PIN, LOW); digitalWrite(GREEN_PIN, LOW); digitalWrite(RED_PIN, LOW); // end of the LED discharge stage

// reverse polarity to charge the red channel's internal capacitance:
pinMode(RED_PIN, OUTPUT); pinMode(LED_GROUND_PIN, INPUT_PULLUP);
_delay_us(24); // alternative to delayMicroseconds() that does not need Timer0
noInterrupts();
// enable pin change interrupts on the D5 ground line
bitSet(PCMSK2,PCINT21); // set the Pin Change Mask Register to respond only to D5
bitSet(PCIFR,PCIF2);    // clears any outstanding pin change interrupt flags (from PortD)
bitSet(PCICR,PCIE2);    // enable pin change interrupts for PortD (D0 to D7)

set_sleep_mode(SLEEP_MODE_IDLE); // this mode leaves Timer1 running
sleep_enable();                  // sleep_cpu() in the loop below has no effect without this
timer1overflowCount = 0;         // zero our T1 overflow counter

// reset & start Timer1
TCCR1A = 0;           // compare mode & waveform generation bits set to zero (default)
TCCR1B = 0;           // stop Timer1 by setting the clock select bits to zero (default)
TCNT1 = 0;            // reset the Timer1 count register to zero
bitSet(TIMSK1,TOIE1); // enable the Timer1 overflow interrupt so we can count overflows
bitSet(TCCR1B,CS10);  // starts Timer1 counting with no prescaling @ 8 MHz (on a 3v ProMini)
interrupts();
PIND = gndPin; // toggles the PORTD bit: a faster equivalent of digitalWrite(LED_GROUND_PIN,LOW)

do { sleep_cpu(); } while (PIND & gndPin); // evaluates true as long as the D5/gnd line reads HIGH
TCCR1B = 0;             // STOPs Timer1 (redundant if an ISR already stopped it, but just making sure)
bitClear(TIMSK1,TOIE1); // T1 overflow interrupt also disabled
sleep_disable();

bitClear(PCICR,PCIE2);    // now disable pin change interrupts (D0 to D7); note: the enable bit lives in PCICR, not PCIFR
bitClear(PCMSK2,PCINT21); // reset the PC Mask Register so we no longer listen to D5
bitSet(PCIFR,PCIF2);      // clear any outstanding pin change interrupt flags

pinMode(RED_PIN,INPUT);
pinMode(LED_GROUND_PIN,OUTPUT); // normal 'ground' pin function for the indicator LED
return ((timer1overflowCount << 16) + TCNT1); // returned as uint32_t, so the max allowed is 4,294,967,295
}
// and the required ISRs
ISR (TIMER1_OVF_vect) {
  timer1overflowCount++;
  if (timer1overflowCount > 10000) { // this low-light limiter must be < 65534
    DDRD |= _BV(LED_GROUND_PIN);     // sets our gnd/D5 pin to output (it is already LOW)
    // pulling D5 low breaks out of the main do-while loop
    TCCR1B = 0;                      // STOPs Timer1: CS12-CS11-CS10 = 0-0-0 removes the clock source
  }
}

ISR (PCINT2_vect) {            // pin change interrupt vector (for D0 to D7)
  TCCR1B = 0;                  // STOPs Timer1
  DDRD |= _BV(LED_GROUND_PIN); // forces the gnd pin low to break out of the sleep loop
}
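One thing the listing above doesn’t show is re-enabling everything afterwards: power_all_disable() leaves Timer0, the USART, etc. stopped, so millis(), delay() and Serial are all frozen until you restore them. A typical call from the main loop might look like this (a sketch, assuming the ADCSRA copy saved inside the function is kept somewhere the caller can reach):

uint32_t redCounts = readRedPinDischarge_Timer1();
power_all_enable();   // restart Timer0, USART, TWI, SPI, etc.
ADCSRA = keep_ADCSRA; // restore the ADC (it was zeroed before the read)
Serial.print(F("Red channel counts: "));
Serial.println(redCounts);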
Key details:
A 1k resistor was present on the LED’s common GND line for all these tests, but that limit resistor has no effect on the photo-discharge time.
The code above tweaks our standard discharge method (on GitHub) with port commands & PIND when things need to happen as fast as possible, but also uses slower digitalWrite/pinMode commands in places where you want to spend more time (in the pre-read channel discharge steps). Setting the power reduction register bits with power_all_disable() lowers current draw during SLEEP_MODE_IDLE, and it also shuts down Timer0, so those pesky 1 msec overflows don’t disturb the count. Waking from SLEEP_IDLE adds a constant offset of about 8 clock cycles, but it reduces the jitter you’d normally see with the CPU running. One or two clock cycles of jitter is normally unavoidable with a running processor because you can’t respond to an interrupt flag in the middle of an instruction. Interrupts are also blocked while you are processing some other interrupt, so if the AVR is dealing with a Timer0 overflow, the LED-triggered pin change would have to wait in line.
This Timer1 method increases resolution by an order of magnitude (so you can measure higher light levels), but that led me to the realization that timing jitter is not the major source of error in this system. Before light even reaches the diode it is redirected by the LED’s reflective cavity and the encapsulating lens. Sampling time is also a factor during calibration because light levels can change in an instant, so any temporal offsets between your reference and your LED reading will also add noise.
Does light sensing with LEDs really work?
One way to demonstrate the limits of a garden variety RGB is to cross-calibrate against the kind of LUX sensors already in common use. Most LED manufacturers don’t worry much about standardizing these ‘penny parts’, so insert here all the standard caveats about the limitations of empirically derived constants. I covered frequency shift in the index-sensor post, and there’s an obvious mismatch between the wide spectral range of a BH1750 (lux sensor) and the sensitivity band of our LED’s red channel:
Most of us don’t have a benchtop source to play with, so I’m going to try this using sunlight. The variability of natural light is challenging, and the only thing that lets me use that LED band as a proxy for LUX is that intensity from 400-700nm is relatively consistent at the earth’s surface.
The most difficult lighting conditions to work with are partially cloudy days with many transitions from shadow to full sun. Because the reference and LED sensors are in different physical locations within the housing, shadows that cross the logger as the sun moves across the sky will darken one of the two sensors before the other if they are not aligned on the same north-to-south axis before your tests.
Skylight also undergoes a substantial redistribution of frequencies at sunrise/sunset, and that may produce a separation between the response of the ‘yellow-green’ sensitive red LED channel and the wider sensitivity range of the BH1750.
The biggest challenge for a cross calibration is that LEDs don’t match the ‘Lambertian’ response of our reference. A bare silicon cell has a near-perfect cosine response (as do all diffuse planar surfaces), producing a circular pattern on polar intensity diagrams. The BH1750 comes very close to that, but LEDs have a range of different patterns because of their optics:
Directional characteristics of the BH1750, from the BH1750 datasheet (Fig. 5, pg. 3). This plot is in the style of the right-hand side of the Broadcom diagram, which shows both polar and linear equivalents.
Relative luminous intensity versus angular displacement, from the Broadcom datasheet (Fig. 10) for HLMP-Pxxx series subminiature LED lamps.
But those challenges are good things: most tutorial videos on YouTube use ‘perfect datasets’ to illustrate concepts. Data from real-world sensors is never that clean. In fact, the biggest challenge for educators is finding systems that are ‘constrained enough’ that the experiment will work, but ‘messy enough’ that students develop some data-wrangling chops. Many beginners are unaware of the danger of trusting R-squared values without understanding the physical & temporal limitations of the system: (you may want to expand this video to full screen for better viewing)
A note about the graphs shown below: I’m lucky to get one clear day per week at my location, and the search for ‘the best’ LED light sensor will continue through the summer. I will update these plots with ‘cleaner’ runs as more data becomes available.
The metal reflecting cup around the diode is an unavoidable source of error in this system:
Reflectors cause convergence, leading to complex dispersion angle plots (blue) when compared to a Lambertian cosine response (purple)
The curve will also be affected by the shape and volume of the encapsulation. Some LED suppliers provide photometric files in addition to FWHM plots for their LEDs. Of course, at the hobbyist level just finding datasheets is challenging, so it’s usually easier to just take some photos of the LED against a dark grey card.
IESviewer features a rendering tool that can be used to show the spread & intensity of light emitted, using photometric files from the manufacturer.
I could not find any information for the cheap eBay parts I’m using, so I decided to start with a 5050 LED with very little lens material over the LED:
Both sensors are suspended on the inside of the logger housing with transparent Gorilla-brand mounting tape. Orange lines highlight areas where my deployment location suffers from unavoidable interference with the calibration. The light is reduced by passing through both the HDPE of the housing lid & a glass window.
The 5050 response crosses the Lambertian curve several times but the pattern still broadly follows the reflector cup diagram: the LED response shows a noon-time ‘deficit’ relative to the brighter ‘shoulders’ at midmorning & midafternoon.
The logger was suspended in a south-facing skylight window during these tests. Window frame shadow-crossing events produce error spikes in opposite directions at ~6:30 am & pm, while wind-driven tree leaf shadows can produce errors in both directions from about 3:00 pm until the evening shadow crossing, depending on whether the BH1750 or the LED is temporarily shaded. This was the least compromised location I could find in my urban environment.
Now let’s look at a clear 5mm RGB LED:
After omitting the shadow-cross events (orange circles), the 5mm clear LED has large % errors due to strong focusing by the lens when the sun is directly above the emitter. This LED would make a terrible ambient light sensor, but the curves are so well defined that, with a little work, it could be used to determine the angle of the sun as it progresses across the sky without any moving parts.
This non-diffused pattern is predicted by Figure 10 in the Broadcom datasheet, with the tight dispersion angle of the lens producing a strong central ‘hot spot’. The overall pattern is inverted relative to the 5050 (which is primarily just the metal reflector cup), although the effect of the lens is much stronger. Adding small glass particles to the epoxy diffuses the light, reducing the ‘focusing power’ of that lens:
5mm diffused round RGB vs BH1750 lux. Outside areas with external interference, the %RE is ±12%.
The diffused 5mm response could be seen as an ‘intermediate mix’ of the 5050 & clear LED response curves. We can modify the response by sanding the top of the LED flat:
5mm diffused LED with the lens sanded off. The morning was overcast on this day till about 10am, with full sun after that. This eliminated the expected 7am ‘shadow crossing’ error; however the change in lighting conditions also upset the symmetry of the overall response in terms of the trendline fit.
Removing the lens returns us to a pattern similar to the 5050, dominated by the effect of the metal reflector. So the key to making this calibration work will be finding a combination of lens & diffuser that brings the LED response closer to the BH1750:
10mm diffused LED vs BH1750 lux. The overall shape & % error range is similar to the 5mm diffused, but the slopes are reduced because the lens is less aggressive & the diffusing epoxy is thicker.
10mm diffused LED covered with 2 thin sheets of PTFE over the dome. The two layers of plumbers tape are applied perpendicular to each other and held in place with clear heat shrink.
PTFE tape is such a good diffusing material that it has disrupted the smooth refraction surface of the lens, essentially returning us to the 5050 pattern we saw with the physical removal of the lens from the 5mm LED.
10 mm diffused LED with the top sanded flat & two crossing layers of PTFE tape to provide a ‘diffusely reflecting’ surface: one of the requirements for Lambert’s cosine law.
Finally we have a combination where the errors no longer show a clearly defined structure, with noise randomly distributed around zero. We still have ±10% cloud-noise, but that is related to the time delta between the reference readings and the LED reading, so data from the LED alone will be cleaner. This two-step modification will turn a garden variety LED into a reasonable ambient light sensor, and the PTFE tape is thin enough that the LED is still usable as a status indicator.
Why is the LED a power law sensor?
Power laws are common in nature, arising when a relationship is controlled by surface area to volume ratios. As near as I understand it: when absorbed photons generate electron-hole pairs in the diode, only those pairs generated in the depletion region, or very close to it, have a chance to contribute to the discharge current, because there is an electric field present to separate the two charge carriers. In a reverse biased p-n junction, the thickness of this depletion region scales with the square root of the bias voltage. So the volume of diode material that can ‘catch photons’ depends on the voltage we initially placed across the diode, and that voltage falls as each captured photon reduces the capacitive charge stored at the ‘surfaces’ of the diode. So the active volume gets smaller but the surface area is left relatively unchanged. I’m sure the low level details are more complicated than that, and power law patterns arise in so many different systems that it might be something entirely different (?)
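For reference, the textbook one-sided abrupt-junction formulas show where that square-root dependence comes from (V_bi is the built-in potential, V_R the reverse bias, N the doping density; a real LED die is messier than this idealization):

W = \sqrt{\frac{2\,\varepsilon\,(V_{bi} + V_R)}{q\,N}}
\qquad
C_j = \frac{\varepsilon A}{W} \;\propto\; \frac{1}{\sqrt{V_{bi} + V_R}}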
Two I2C 0.96″ OLED screens used simultaneously. Left is split Yellow: 128×16 pixels & Blue: 128×48 pixels, while the right screen is mono-white. The CODE ON GITHUB drives each screen in different memory modes to simplify the functions, and uses internal EEPROM memory to store fonts/bitmaps.
Oceanographic instruments rarely have displays, because they don’t contribute much to profilers being lowered off the side of a boat on long cables. And most of our instruments spend months-to-years in underground darkness, far from any observer. But a few years ago we built a hand-held flow sensor that needed to provide operator feedback while the diver hunted for the peak discharge point of an underwater spring. That prototype was our first compelling use-case for a display screen, and we used cheap Nokia 5110 LCDs because they were large enough to see in murky water. While driver libraries were plentiful, I realized that Julian Ilett’s shift-out method could pull font maps from the Arduino’s internal eeprom rather than progmem. This let us add those SPI screens to a codebase that was already near the memory limits of a 328p.
Modifications done to the second display: The only thing you must do for dual screens is change the bus address. Here I’ve also removed the redundant I2C pullups (R6 & R7, 4k7) and bridged out the 662k regulator, which isn’t needed on a 3.3v system. These changes usually bring sleep-mode current on 0.96″ OLEDs to about 2μA per screen (or ~5μA each with the 662k regulator left in place).
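On the common 0.96″ modules the two available addresses are 0x3C (default) and 0x3D, usually selected by moving a resistor on the address-select pad (board labels vary, so check yours). After that, driving both screens is just a matter of addressing them separately; a minimal sketch:

#define OLED_LEFT  0x3C   // default bus address
#define OLED_RIGHT 0x3D   // second screen, address resistor moved

Wire.beginTransmission(OLED_LEFT);
Wire.write(0x00); Wire.write(0xAF);   // control byte 0x00 = command, 0xAF = display ON
Wire.endTransmission();
Wire.beginTransmission(OLED_RIGHT);
Wire.write(0x00); Wire.write(0xAF);
Wire.endTransmission();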
Now it’s time to work on the next gen, and the 0.96″ OLEDs that we initially ignored have dropped to ~$2.50 each. Popular graphic libraries for these displays (Adafruit, U8G2, etc.) provide more options than a swiss army knife, but require similarly prodigious system resources. Hackaday highlighted the work of David Johnson-Davies, who’s been using these screens with ATtiny processors where such extravagant memory allocations aren’t possible. His Tiny Function Plotter leverages the SSD1306’s vertical addressing mode to plot a graph with a remarkably simple function. These OLED screens support two bus addresses, and that set me to work combining that elegant grapher with my eeprom/fonts method for a two-screen combo. Updating Ilett’s shift-out cascade to I2C would lose some of the memory benefit, but I was already embedding Wire.h in the code build for other sensors. It’s worth noting that David also posted a full-featured plotter for the 1106, but these tiny displays are already at the limits of legibility and I didn’t want to lose any of those precious pixels. Moving the axis labels to the other screen forced me to add some leading lines so the eye could ‘jump the gap’:
My eyes had trouble with single-pixel plots so I made the line 2-pixels thick.
Histogram seems the best underwater visibility mode. Dashed lead-lines toggle off
A little repetition produces a bar graph variant.
The battery indicator at upper left uses the same function as the buffer memory progress bar in the center. Axis label dashes along the right hand side correspond to the lead lines on the next screen. My goal with this layout was ‘at-a-glance’ readability. Horizontal addressing makes it easy to update these elements without refreshing the rest of the screen, so there is no frame buffer.
To drive these screens without a library it helps to understand the controller’s different memory addressing modes. There are plenty of good tutorials out there, but the gist is that you first specify a target area on screen by sending (8-pixel high) row and (single pixel wide) column ranges. These define the upper-left & lower-right corners of a modifiable region, and the ssd1306 plugs any subsequent data that gets sent into the pixels between those corner points using a horizontal or vertical flow pattern. If you send more data than the window can hold, it jumps back to the upper-left starting point and continues to fill over top of the previous data. In ALL memory modes, each received byte represents a vertical 8-pixel stripe, which is perfect for displaying small fonts defined in 6×8-bit blocks. For the taller characters I use a two-pass process that prints the tops of the large numbers first, changes the eeprom memory offset, and then does a second pass in the next row to create the bottoms of the characters. This is different from typical scaling approaches, but it gives the option of creating a three-row (24 pixel high) font with basically the same method, and since the fonts are stored in eeprom there’s no real penalty for those extra bits.
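As a sketch of that two-pass idea (with hypothetical helper names: setDisplayWindow() sets the page/column ranges the same way the progress bar code below does, sendDataBytes() wraps the Wire data writes, and BIGFONT_TOP/BIGFONT_BOTTOM are the assumed eeprom offsets for the two glyph halves; eeprom_read_block() is the standard avr-libc call):

void printBigDigit(uint8_t digit, uint8_t page, uint8_t col) {
  uint8_t stripe[6];
  // pass 1: the top half of the glyph goes into this 8-pixel page
  setDisplayWindow(page, page, col, col + 5);
  eeprom_read_block(stripe, (const void*)(BIGFONT_TOP + digit * 6), 6);
  sendDataBytes(stripe, 6);
  // pass 2: a second 6x8 block one page down forms the bottom half
  setDisplayWindow(page + 1, page + 1, col, col + 5);
  eeprom_read_block(stripe, (const void*)(BIGFONT_BOTTOM + digit * 6), 6);
  sendDataBytes(stripe, 6);
}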
In addition to text & graphs, I need a battery status indicator and some way to tell the operator when they have waited long enough to fill the sampling buffers. In the first gen units I used LED pips but, similar to the way vertical addressing made the Tiny Function Plotter so clean, horizontal mode makes it very easy to generate single-row progress bars:
// set the boundary area for the progress bar
Wire.beginTransmission(ssd1306_address);
Wire.write(ssd1306_commandStream);
Wire.write(ssd1306_SET_ADDRESSING);
Wire.write(ssd1306_ADDRESSING_HORIZONTAL);
Wire.write(ssd1306_SET_PAGE_RANGE);
Wire.write(row); Wire.write(row); // page start / end are the same in this case
Wire.write(ssd1306_SET_COLUMN_RANGE);
Wire.write(barColumnMin); Wire.write(barColumnMax); // column start / end
Wire.endTransmission();

// determine the column where the progress bar transitions from full to empty
int changePoint = ((percentComplete * (barColumnMax - barColumnMin)) / 100) + barColumnMin;

// loop through the column range, sending a 'full' or 'empty' byte pattern
for (int col = barColumnMin; col < barColumnMax + 1; col++) {
  Wire.beginTransmission(ssd1306_address);
  Wire.write(ssd1306_oneData); // send 1 data byte at a time to avoid Wire.h buffer limits

  // full indicator = all but the top pixel on, also using the 'filled' byte as end caps
  if (col < changePoint || col <= barColumnMin || col >= barColumnMax) { Wire.write(0b11111110); }
  else { Wire.write(0b10000010); } // empty indicator = edge pixels only

  Wire.endTransmission();
} // end of the column loop
I mount screens under epoxy & acrylic for field units, but hot glue holds alignment just fine at the prototyping stage, and it can be undone later.
And just for the fun of it, I added a splash screen bitmap that’s only displayed at startup. Since the processor’s internal eeprom is already being used to store the font definitions, this 1024 byte graphic gets shuttled from progmem into the 4K EEPROM on the RTC modules we use on our loggers. It’s important to note that I’ve wrapped the font & bitmap arrays, and the eeprom transfer steps, within a define at the start of the program:
#define runONCE_addData2EEproms
#ifdef runONCE_addData2EEproms
// ... progmem arrays & eeprom transfer functions that only need to execute once ...
#endif
Since data stored in eeprom is persistent, you can eliminate those functions (and the memory they require) from the compile by commenting out that define after it’s been run. One thing to note is that the older AT24C32 eeprom on our RTC module is only rated to 100kHz, while bus coms to the OLED work well at 400kHz.
You can update the memory in these screens even if the display is sleeping. So a typical logger could send single line updates to the graph display over the course of a day. There’s a scrolling feature buried in the 1306 that would make this perpetual without any buffer variables on the main system. Color might make it possible to do this with three separate sensor streams, and the 1106 lets you read back from the screen’s memory, providing an interesting possibility for ancillary storage. However, with Arduino migrating to newer MCUs, I think creative code tricks like that are more likely to emerge from ATtiny users in future. As a logger jockey, the first questions that come to mind are: can I use these screens as the light source in some interesting sensor arrangement that leverages the way I could vary the output? What’s the spectrum of the output? Could they be calibrated? What’s the maximum switch rate?
Screens are quite readable through the clear 3440 Plano Stowaway boxes used for our classroom logger (here I’ve hot glued them into place on the lid). Avoid taking new sensor readings while information is being displayed on the OLED screens, because they ‘pulse’ power to the pixels (100ms default) when information is being displayed. Each screen shown here peaks at about 20 mA (this varies with the number of pixels lit), so you could be hitting the rail with a high frequency load of about 40mA with dual screens, which could cause noise in your readings unless you buffer with some beefy caps. Also note that the screens’ tiny solder connections are somewhat fragile, so avoid putting hot glue on the ribbon cable.
As a final comment, the code on GitHub is just an exploration of concepts, so it’s written with readability in mind rather than efficiency. It’s just a repository of functions that will be patched into other projects when needed, and as such it will change/grow over time as I explore other dual screen ideas. Kris Kasprzak has a slick looking oscilloscope example (+ more here), and Larry Banks’ BitBang library lets you drive as many I2C screens as you have pins for; or you could try a multiplexer on the I2C bus. These OLEDs will only get larger and cheaper over time (& it will be a while before we see that with e-paper). An important question for our project is: which screens hold up best when subjected to pressure at depth? This was a real problem with the Nokia LCDs.
Addendum 2021-04-05:
I finally took a closer look at the noise from those cheap SSD1306 OLED screens, and was surprised to find the usual [104] & [106] decoupling combo simply wasn’t up to the task. I had to go to a 1000uF tantalum [108] to really make a dent in it. Contrast settings reduced the current spikes somewhat, but did not change the overall picture very much.
The following are before & after graphs of a start sequence from one logger that ends with 8 seconds of the logger sleeping while the latest readings get displayed on screen:
Logger current with an SSD1306 0.96″ OLED display: before & after the addition of a 1000uF tantalum [108]
This unit was running without a regulator on 2x lithium AAs, and I had noticed intermittently flaky coms when the screen pixels were on. A forum search revealed people complaining about ‘audible’ noise problems in music applications, and others finding that their screen’s charge pump was driving the pixels near the default I2C bus frequency of 100kHz. I’ve dozens of these things lying around, and will add more to this post if testing reveals anything else interesting. The cap pulls about 1A at startup, so I’m not even sure I could put one that large on a regulated ProMini without pushing the MIC5205 into over-current shutdown.
A $1.50 soil moisture sensor: ready to deploy. My first experiments with the pulsed output hack didn’t quite work due to probe polarization at low frequencies…but I’m still noodling with it.
There’s something of a cultural divide between the OpAmp/555 crowd and Arduino users. I think that goes some way to explaining the number of crummy sensor modules in the hobbyist market: engineers probably assume that anyone playing in the Arduino sandbox can’t handle anything beyond analogRead(). So it’s not unusual to find cheap IC sensor modules with cool features simply grounded out, because how many ‘duino users could config those registers anyway – right?
Legit suppliers like Adafruit do a much better job in that regard, but our project rarely uses their sensors because they are often festooned with regulators, level shifters, and other ‘user-friendly’ elements that push the sleep current out of our power budget. Sparkfun boards usually have leaner trim, but with all the cheap PCB services available now I’m tempted to just roll my own. Thing is, I keep running into the problem that in ‘prototype quantities’ the individual sub-components often cost more than complete modules with the reflow already done – and that’s before everything gets sand-bagged with shipping charges larger than the rest of the project combined.
So we still use a lot of eBay modules – after removing the usual load of redundant pullups and those ubiquitous 662k regulators. Once you go down that rabbit hole you discover a surprising number of those cheap boards can be improved with ‘other’ alterations. The reward can be substantial, with our low power RTC mod bringing $1 DS3231 modules from ~0.1 mA down to less than 3µA sleep current – which is quite useful if you want to power the entire logger from a coin cell.
Schematic before conversion (regulator not shown). The output frequency is controlled by the time constant when charging/discharging C3 through R2/R3. Gadget Reboot gives a good overview of how the basic configuration works, feeding the RC filtered 555 output into a simple peak detector. Older NE555 based probes run at 370 kHz, but the V1.2 probes with the TLC555 run at a higher 1.5 MHz frequency with a 34% duty cycle. It’s worth noting that dissolved salts, etc. affect soil moisture readings until you reach the 20-30 MHz range.
These sensors use coplanar traces to filter high frequency output from a 555 oscillator, but you’re lucky to see a range of more than 400 raw counts reading that filtered output with Arduino’s 10-bit ADC. On 3.3v systems, you can remove the regulator and combine the sensor with an ADS1115 for 15-bit resolution. I found you can’t push that combination past the ±4.096V gain setting or the 1115’s low input impedance starts draining the output capacitor. But that still gives you a working range of several thousand counts, as long as you remember that the ADS1115 uses an internal vRef, so you also need to monitor the voltage supplied to the sensor to correct for battery variation. Trying to read the sensor’s output with my EX330 while the sensor was running brought the output voltage down by about 0.33v, and that meter has 10MΩ internal impedance similar to the ADS, so a cheaper voltmeter will knock the output from this sensor around even more. Differential readings can reduce this problem because they double the ADC’s impedance.
Analog mode works well with long cable runs, and piping that through an ADS1115 lets you communicate with three soil sensors over I2C if you are low on ports. And up to four of those $1 ADCs could be hung off the same bus, though I’ve learned the hard way not to put too many sensors on one logger because it risks more data loss if you have a point failure due to battery leaks, critters, or vandalism. It only takes a couple of hours to build another logger for each set anyway.
3.3v ProMini ADC readings from an analog soil moisture sensor at ~8cm depth (vertical insertion), with DS18b20 temp from the same depth. We ran several of these sensors in our back yard this summer. This segment shows daily drawdown by vegetation during the hottest/driest month of the year, followed by several rainfall events starting on 9/4. This logger was running unregulated from 2x lithium AA cells, and the regulator was also removed from the soil sensor. Aref moves in step with the supply voltage, so we didn’t have interference from battery variation. C5 & C6 were removed from the board and the sensor was powered from a digital pin with 8 seconds to stabilize (probably more time than needed). This humble sensor was less affected by daily temperature cycles than I was expecting, but if you use ‘field capacity’ as your starting point the delta was only ~200 raw ADC counts. I manually watered the garden on 8/23 & 8/24 to keep the flowers from dying, but it was only a brief sprinkle. We have 6 months of continuous operation on these probes so far, but with only that thin ink-mask protecting the copper traces I’d be surprised if they go more than a year. Even with epoxy encapsulating the 555 circuit, small rocks could easily scratch the probe surface on insertion, leading to accelerated corrosion.
These things are cheap enough that I’ve started noodling around with them for other tasks like measuring water level. With a water/air dielectric difference of about 80:1, even unmodified soil probes do that job well if you put in a little calibration time. When powered at 3.3v, the default config outputs a range of ~3.0v dry to 1.5v fully submerged in water. So Arduino’s 10-bit ADC gives you decent resolution over the probe’s 10cm length. And since the TLC555 doesn’t draw more than 5mA (after settling), you can supply it from a digital I/O pin to save power, provided you give the sensor a couple of seconds to charge the output capacitor after power on. Adding 120-150Ω in series to throttle capacitor inrush still leaves you with ~1.2v air/water delta. The sensors I tested stabilize at power on after ~1 second, but take more than 35 seconds to ‘discharge’ down if you suddenly move the probe from air to water; this behavior indicates the boards are missing the R4 ground connection. (Note: After checking batches from MANY different suppliers, I’ve now started adding a 1 Meg ohm resistor across the output of any sensors that output ~95% of their supply voltage in ‘free air’. Sensors with the 1 Meg already connected the way it’s supposed to be on the board usually output ~85% of their supply voltage with the sensor in air & about 35% of their supply when completely submerged in water.)
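With an ‘in air’ reading and a ‘fully submerged’ reading as your two calibration points, converting raw ADC counts to an approximate water level is a one-line interpolation. A sketch (dryADC & wetADC are hypothetical values you’d measure yourself; note the output voltage falls as more of the probe is submerged):

const int dryADC = 930;  // hypothetical: ~3.0v in air on a 3.3v / 10-bit system
const int wetADC = 465;  // hypothetical: ~1.5v fully submerged

int raw = analogRead(A0);                 // probe output through the peak detector
float fraction = (float)(dryADC - raw) / (float)(dryADC - wetADC);
fraction = constrain(fraction, 0.0, 1.0); // clip readings outside the calibration range
float level_cm = fraction * 10.0;         // the probe's active length is ~10 cm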
A quick search reveals plenty of approaches to making the sensor ready for the real world, but we’ve had success hardening different types of sensor modules with an epoxy + heat-shrink method that lets you inspect the circuits over time:
Cable & Epoxy Mounting: (click images to enlarge)
Clean the sensor with 90% isopropyl alcohol. An old USB or phone cable works for light-duty sensor applications. Here I’ve combined two of the four wires to carry the output.
3/4″ (18mm) to 1″ (24mm) 2:1 heat-shrink forms a container around the circuits, with smaller adhesive-lined 3:1 at the cable to provide a seal to hold the liquid epoxy.
Only fill to about 15% of the volume with epoxy. Here I’m using Loctite E30-CL. This epoxy takes 24 hours to fully cure.
GENTLE heating compresses the tubing from the bottom up. This forces the epoxy up over the components.
Before finishing, use a cotton swab to seal the edges of the probe with the epoxy. Cut PCBs can absorb several % water if edges are left exposed.
Complete the heating over a rubbish bin to catch the overflow & wipe the front surface of the probe so that you don’t have any epoxy on the sensing surfaces.
I generally use our Classroom logger for backyard experiments. You want at least 2-3 monitoring locations due to the spatial variability of infiltration pathways, and then 2-3 sampling depths at each spot for a complete profile. Agricultural applications usually install two sensors, with one shallow sensor at 25-30% of the root zone depth and one deep at 65-80% of the root zone depth. But research projects install up to one sensor every 10cm through the soil profile. If you are doing that, then you might want to install the sensor boards parallel to the ground surface and only power them one at a time so they don’t interfere with each other.
If the distance between interdigital coplanar electrodes is comparable to the smallest dimension of each electrode, then ‘fringe fields’ are significant. As a rule of thumb, the field usually has a penetration distance of between 0.5-1x the distance between the centers of the electrodes. I found several papers modeling that as one third of the width of the electrode plus the gap between electrodes: EPD ≈ (W + G)/3. In this case I estimate the useful sensing distance is less, say 3-6 millimeters from the probe’s surface, so you have to take care there are no air gaps near the sensor surface during the calibration, and when you deploy. Agricultural sensors are often placed in an augered hole filled with a screen-filtered soil slurry to avoid air bubbles. One of the biggest challenges is that even after a good deployment, the soil dries out and ‘breaks away’ from contact with the surface of your sensor. This will give you grief with almost all of the soil measuring sensors on the market (with the possible exception of neutron probes). Another issue with this method is that drilling the hole necessarily cuts any moisture-pulling roots away from the probe, creating offsets relative to the root-permeated soil nearby.
The story gets more complicated with respect to growing things, because the pores of a sandy soil might provide more plant available water than absorbent clay soils (see: matric potential). So it’s worth taking the time to determine the texture of your soil, and you have to consider the root depth of your crop (typically 6-24 inches). With all the confounding factors, soil sensors are often used in the context of ‘relative’ boundaries by setting an arbitrary upper threshold for the instrument’s raw output at field capacity (~two days after a rain event) and the lower threshold when plant wilting is observed. Soil moisture sensors also need to be placed in a location that receives a similar amount of sunlight to account for evapotranspiration. Then there’s all the factors related to your sampling technique.
Using ‘field capacity’ as our upper limit might make it possible to do a basic three-point calibration of your sensors: dig your trench and take ‘in situ’ measurements with your probe before you disturb the rest of the soil. Extract two samples with a tubular ‘cutter’ whose volume is large enough to cover your sensor, and weigh them (you can back-calculate the gravimetric % moisture for those in-situ readings later from the sample you dry out). Dry one completely in an oven, and raise the other one to field capacity by adding water and then letting it drain & stabilize for a day or so in a 100% humidity chamber (a big zip-lock bag?). The wet sample must be allowed to drain under gravity until water is no longer actively dripping from it. Then weigh both samples again & take readings with your sensor embedded in the wet & dry soil samples packed back into their original sampling volume. The tricky part is that you probably need to use a metal tube for cutting your sample (and for the oven drying), but you don’t want metal near a capacitance probe later on, so you would want to transfer your soil plugs into a PVC pipe with the same internal volume using some kind of plunger before the final measurements. The idea is to keep the relative ‘density’ of the sample similar to the original soil in situ.
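The back-calculation mentioned above is just the standard gravimetric water content, from the wet & oven-dry sample weights (minus the container):

\theta_g = \frac{m_{wet} - m_{dry}}{m_{dry}} \times 100\%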
The Hack: (click images to enlarge)
After testing some ‘naked’ boards (where I had removed all the components), I realized that the bare traces fell within a workable capacitance range for the same astable configuration the timer was already wired for:
Get the regulated boards with the CMOS TLC555 chip rather than the NE555. 3.3v operation is out-of-spec for the NE, and half of the NEs I’ve tried didn’t even work in their default analog config at 5v. (& the 3.3v 662k reg is below spec if your supply falls below ~3.4v)
I remove the regulator & bridge the Vin to Vout pads. If you are going to pin-power the sensor, you also need to remove the reg caps C5 (10µF) & C6 (160nF), as their inrush current could damage your MCU. Or add a ~150Ω limit resistor in series, but that will drop the output voltage.
Note that some variants of these sensors come with the regulator already removed. DO NOT BUY THESE BOARDS: they use older NE555 chips which won’t operate at 3.3v.
Remove the T4 diode, the C3 & C4 caps, and the R2 & R3 resistors as shown.
Move the 10k R1 to the R3 pads, and the 1 Meg ohm R4 to the R2 pads. When the new R2 is more than 10x the new R3 resistor you approach a 50% duty cycle on the FM output.
Bridge the R1/T4 pads closest to the 555. This patches the frequency output to AOUT. Linking the other T4 pad to C3 puts the probe’s capacitance in its place.
There isn’t much left after the mod. It’s important to leave the C1 bypass in place, but I haven’t had problems pin-powering other sensor boards with similar caps; it’s the bare minimum you can get away with. (& add a series limiter if pin powering)
It’s worth mentioning that the 555’s notorious supply-current spikes during output transitions might give you bad reads or weird voltage spikes. I tried pin powering anyway, because the TLC555 is so much better than the old NE555s, which would surely destroy any I/O pin used to supply them. So I’m sailing pretty close to the wind here: it would be much wiser to just leave the original C5/C6 caps in place and skip pin powering altogether. Using a MOSFET is the safer choice, but hey, this is a hack… right? Officially, as little as 0.1uF needs a limit resistor to protect I/O pins, but ‘unofficially’ pin-powering dodgy circuits with AVR pins is surprisingly robust.
With the probe capacitance varying from <30 pF to ~400 pF (air/water), using 1M/10k resistors on the R2 & R3 pads will give you about 50 kHz output in air, and ~1.5 kHz in water. Or you could drop in your own resistor combination to tune the output. Our ProMini based loggers have a relatively slow 8 MHz clock, so the upper limit for measuring period intervals is about 120 kHz before things get flaky. On a 16 MHz board you can reach 200 kHz easily, and with a faster MCU you could tweak that to even higher speeds. But at first bounce, with parts I already had, the 1M & 10k combination seemed workable.
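As a rough sanity check, if the modified board follows the classic 555 astable topology (an assumption on my part; I haven’t traced every board variant), the standard frequency approximation with the 1M/10k pair and ~30 pF of probe capacitance in air lands in the same ballpark as the measured ~50 kHz:

f \approx \frac{1.44}{(R_2 + 2R_3)\,C} = \frac{1.44}{(1\,\mathrm{M\Omega} + 2 \times 10\,\mathrm{k\Omega}) \times 30\,\mathrm{pF}} \approx 47\ \mathrm{kHz}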
The TLC555 draws far less current and works at 3.3v. It can also be pushed to 2 MHz in astable mode, though you’d need a more advanced counter to keep up. The LMC555 & TLC551 work at even lower voltages than the TLC555.
You have a variety of options for reading pulsed signals with an Arduino. We reviewed those in ‘Adding Sensors to an Arduino Datalogger’, but the short version is: Tillaart’s TSL235R example works, or you could try PJRC’s FreqCount library. Gate interval methods count many output cycles, which reduces error and lets you approach frequencies of about 1/2 your uC clock speed (depending on the interrupt handling). However, at lower frequencies the precision suffers, so measuring the elapsed time of a single cycle is better. Since I’ve already switched over to using the Input Capture Unit to read thermistors, I started with Reply #12 on Nick’s Timers & Counters page. This works with these hacked soil sensors, though it will occasionally throw a spurious high/low outlier with those rounded trapezoidal pulses. Putting a few repeats through Paul Badger’s digitalSmooth filter crops away those glitch reads, and median filters are also good for single spike suppression. Denys Sene’s SimpleKalmanFilter is also fun to play with, though in this case that’s killing flies with a sledgehammer.
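If you just want to sanity-check a hacked probe before wiring up the ICU code, Arduino’s built-in pulseIn() is the lowest-effort way to measure the period: it’s blocking and only resolves down to a few microseconds, but that’s fine at these frequencies. A minimal sketch (FREQ_PIN is a hypothetical name for whatever input AOUT is wired to):

#define FREQ_PIN 3  // hypothetical: wherever the probe's AOUT is connected

unsigned long tHigh = pulseIn(FREQ_PIN, HIGH, 50000UL); // microseconds, 50 ms timeout
unsigned long tLow  = pulseIn(FREQ_PIN, LOW,  50000UL);
float freqHz = 0;
if (tHigh && tLow) { freqHz = 1000000.0 / (float)(tHigh + tLow); } // one full cycle = high + low time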
As usual, I pretty much ignored all the calibration homework and simply stuck the thing in the ground for a couple of weeks to see what I’d get. Which was fortunate in a way, because the sensor developed a problem which I probably would have missed during short calibration runs:
Probe Frequency [Hz] (left axis) at 5cm soil depth. Rain event on 9/5 reset the probe to normal for a few days.
The daily thermal wobble was expected, but the probe repeatedly displayed an upward bend which did not match the daily ‘stair-step’ drying cycles seen with other nearby soil sensors. Were we slowly pulling the ions out of the soil matrix? Rain events would reset things back to normal behavior, but eventually the rising curve would return. Perhaps that was because the water was leaving, but the ions were being held on the probe surface? When I looked into the frequency of commercial sensors, they were running at least 100 kHz, and most were in the tens of MHz range. Some of them automatically increase frequency with increasing conductance specifically to avoid this type of problem.
While the heat shrink/epoxy method is our gold standard for sensor encapsulation, adhesive-lined 3:1 heat-shrink can do a reasonable job on these sensors if you make sure the surfaces are super-clean with isopropyl alcohol & take time to carefully push out any air bubbles (use gloves so you don’t burn your fingers!). A third alternative is to use hot glue inside regular heat shrink tubing, squashing it into full contact with the circuits while the glue is still warm & pliable. You still have to treat the edges of the PCB, but that can also be done with nail polish. While much faster to prepare, the heat-shrink (shown in the photo above) & hot glue methods will ‘pull away’ from the smooth sensor surface after about 1 year in service; epoxy encapsulation lasts for the lifetime of the probe.
I also tried to measure the electrical conductivity of solutions with this hacked probe. But without polarity reversals, polarization again became a limiting problem, and this only gets worse if you increase resistor values to resolve more detail with slower 555 pulses. So at low freshwater conductivities, where polarization is negligible, the probe works ‘ok’ as a low-resolution EC sensor if you take your readings quickly & then de-power the probe for a long rest period. (while stirring the heck out of it..?) However the readings ‘plateau’ as you approach 10 mS/cm, so it’s not much use in the kind of brackish coastal environments we play in.
At its heart, this is just a variation of standard RC rise-time methods, with the 555 converting that into pulsed output. While my attempts with the 1M & 10k pair didn’t deliver much, the parrot ain’t dead yet. Changing that R2/R3 resistor combination to boost frequencies much higher when the probe is in dry soil might reduce that polarization problem. Or, with a fixed capacitor instead of the probe, I could try replacing a resistor with some plaster of Paris & carbon rods for a matric potential sensor with pulsed output; essentially using the circuit as a cheap resistance-to-frequency converter.
And then there’s all the other capacitive sensors that I could jumper onto the C3 pads. At that point I might as well get out the hack-saw, because I’m really just using the top section of the board as a TLC555 breakout that I can mount under epoxy. When you consider how many of us are working with 3.3v MCUs, a low voltage 555 module should have been on the market already. And while I’m thinking about that – where are all the 3v op-amp boards, with each stage jumper-able into several ‘standard recipes’, and linkable to the next by solder pads or holes along the outer edge? I’m envisioning an SMD quad in the middle, and a few layers of elegantly designed traces & well labeled through-holes for population with resistors & caps. Even at eBay prices, you’d think the mark-up would have made those common as dirt a long time ago . . .
And finally, a cool looking antique gauge hack. It’s too bad they didn’t use a capacitance sensor with it; those two-prong resistive soil sensors usually corrode within a few weeks.
The analog soil sensor from the start of the post is still chugging away, but (with the exception of a few rainy days) after leaf-fall the soil sensor has basically leveled out at ‘field capacity’.
Two heavy rain events in this record. With no more evapotranspiration, the soil stays saturated between them.
So there won’t be much going on over winter, but I will be leaving most of the loggers outside anyway. We have some northern projects waiting in the wings, and I want to see how well the unregulated student builds stand up to the cold. It will be interesting to see if the soil sensor can withstand being frozen right into the soil. I expect those events will register as ‘extremely dry’ due to the reduced dielectric constant of ice.
Addendum 2021-05-02
Lately it’s been challenging to get the 3.3v regulated versions of these soil sensors. Vendors on eBay have started listing 3.3-5v compatibility, and even posting the schematic showing the regulated TLC555 circuit, and then shipping the ‘5v only’ NE555 sensors. This kind of bait & switch is common with low end stuff from China, and the problem is that your shipping charge from the US back to Shenzhen usually costs more than the items. So I just started replacing the NE555 chips myself, since TLC555’s are only about 50¢ each:
A few moments with a heat gun easily lifts the old NE555 chip
Add a good amount of solder to the legs of the replacement chip
With a block to steady your hand, tack down opposite diagonal corners first
The TLC555 is pin compatible and runs down to 2v. You need that range because pin powering the sensor through a 150 ohm limit resistor (average ~5mA draw) leaves only ~2.65v to drive the sensor on a 3.3v system. The lost voltage necessarily compresses the output range too.
The Touchstone TS3002 timer IC could be another interesting 555 replacement option as it’s spec’d to draw only 1μA from a 1.8-V supply. A drain that low brings these soil sensors within the power budget of our 2-Part falcon tube loggers that run for a year on a coin cell.
Addendum 2023-03-18
From an electronics point of view, putting stuff in the ground isn’t really much different than putting it underwater. So people reading this post will probably find some more useful information in our post about Waterproofing Electronics Projects.
Testing configuration with differential reads from a piezo, triggering burst-samples with a $1 ADS1115 module.
The 16-bit ADS1115 has a programmable amplifier at the front end, with the highest gain setting providing a range of ±0.256v and a resolution of about 8 microvolts. But readers of this blog know you can already approach 14-16 bit sensitivity levels with Arduino’s ADC by oversampling with lower Arefs & scaled ranges. PA1EJO demonstrated an ADS1115/thermistor combination which resolved 5 millikelvin, but we reach that resolution on our NTCs using the ICU peripheral with no ADC at all. The beauty of time-based methods is that they scale gracefully with sensors that change over a huge range of values. So why am I noodling around with this ADC module?
The primary attraction is that this ADC has differential inputs, which is especially useful with Wheatstone bridge arrangements. A typical combination would be a two element varying bridge and an inexpensive voltage reference like the LM4040 or the TL431. Adding the second sensor doubles the voltage swing, and the ADC’s -32768 to +32767 raw output fits perfectly into Arduino’s 16-bit integer variables. It’s also worth noting that, unlike most ADCs, the ‘volts per bit’ via the gain settings are independent of the rail voltage supplying the chip. This means that the ADS1115 can measure its own supply using that internal reference, without a divider. The drawback is that I have to set full-scale to ±4.096V on my 3.3v loggers, so the ADC only uses ~80% of the bit range.
Read the difference between A0 and A1 as a differential input, and read A2 as a single-ended input. That gives you an ‘almost’ ratio-metric measurement, because you record every voltage affecting the output. (although since the supply is not Aref like it would be in a regular ADC, the uncorrelated noise in the LM4040 excitation voltage ‘between’ those two readings will not get corrected) I treat the reference resistors as ‘perfect’ in my calculations, which forces their tempco errors into the thermistor constants during calibration.
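Using the ADS1115-lite style functions described later in this post, that read pattern looks something like the sketch below. ADS1115_REG_CONFIG_MUX_SINGLE_2 and PGA_4_096V are mux/gain #defines carried over from the Adafruit library; note the gain has to be widened before the single-ended supply reading, since a ~2v reference would rail the ±0.256v range:

setGain(ADS1115_REG_CONFIG_PGA_0_256V);  // high gain for the small bridge signal
setMux(ADS1115_REG_CONFIG_MUX_DIFF_0_1); // bridge output across A0-A1
triggerConversion();
int16_t bridgeRaw = getConversion();

setGain(ADS1115_REG_CONFIG_PGA_4_096V);  // wider range for the excitation reading
setMux(ADS1115_REG_CONFIG_MUX_SINGLE_2); // LM4040 excitation voltage on A2
triggerConversion();
int16_t excitationRaw = getConversion();

// scale both to volts before taking the ratio, since the two gains differ
float bridgeV     = bridgeRaw     * (0.256 / 32768.0);
float excitationV = excitationRaw * (4.096 / 32768.0);
float ratio = bridgeV / excitationV;     // the 'almost' ratio-metric result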
The diodes shunt any voltages that exceed their Vf, protecting the ADC from spikes beyond the ±0.256v range at high gain. AND the leakage that Schottkys are known for bleeds away residual charge on the piezo, preventing drift from the 1.65v bias point. All the data presented in this post used this circuit. Other diodes, or even garden variety LEDs, could also be used to clip the signal at different voltages. Most have far less leakage than Schottkys, so you might need to add a large-value bleed resistor. If you do end up with an offset, an old trick is to run a very aggressive low pass filter on your readings to obtain the offset and then remove it by subtraction.
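That filter can be as simple as an exponential moving average with a very small alpha: it tracks only the slow drift, which you then subtract from each new sample. A minimal sketch:

float baseline = 0.0;      // seed this with your first reading in practice
const float alpha = 0.001; // very aggressive smoothing, ~1000-sample time constant

int16_t removeOffset(int16_t raw) {
  baseline += alpha * ((float)raw - baseline); // slow-moving estimate of the offset
  return raw - (int16_t)baseline;              // reading re-centered around zero
}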
Piezos can also be read with bridge arrangements if they are physically connected with alternating polarities, but that’s not usually the case for active sensors. I have a new project in the works where the sensor will only generate ±5mv, and I’d like to see if capturing signals that small is even possible with the ADS1115. To reveal where the weak points are, I’ll test it with a single piezo disk reading at the highest gain and fastest sample rates. At this sensitivity a 5mv swing will only produce ~640 counts. With my signal only covering 2% of the bit range, I’m hoping that the differential readings will compensate (?) for noise on the rails. The datasheet warns about low input impedance (710kΩ) but I don’t think that will affect this test. Another significant limitation of the ADS1115 is that, like the Arduino driving it, no voltages below GND or above Vcc are allowed on any input. Bridge arrangements automatically bias to the mid-point, so while the tie-in points might go ‘negative’ relative to each other, they are still positive relative to GND on the ADC. For single sensor applications with ± output, you need to provide that biasing with a couple of resistors.
An often overlooked feature of the ADS is the programmable comparator, which can set threshold alarms on the ALRT/RDY output. Most loggers operate with fixed interval sampling, but this makes it difficult to measure things like airborne exposure peaks for chemical vapors, even with short intervals. Sensor-triggered sampling can also save battery power by letting you sleep the CPU, especially when you are monitoring environments that are only ‘active’ during certain seasons or with rainfall events. The different comparator modes on the ADS1115 also offer some interesting possibilities for system control.
Driving the ADS1115:
This chip’s been around for a long time, so there are several libraries to choose from. And, as usual, many of them don’t support the features that a project building loggers would be most interested in. I suspect this is because wireless coms use such a prodigious amount of power that few in the IoT crowd would bother writing efficient code for chips that already auto-sleep. The Adafruit library even inserts 8ms delay statements, wasting CPU power and throttling the sample rate to 125sps. Rowberg’s I2Cdevlib does a better job with its setConversionReadyPinMode() functions, but his code example only polls the I/O status rather than using the hardware interrupts available on the same pin.
Perhaps the easiest starting point for beginners is the ADS1115 lite library. This is a stripped down version of Adafruit’s lib, but Myers has removed the explicit delays and replaced them with a do-while loop that polls the Operational Status bit to see when the ADC has a new reading in the output registers. This minimalist approach uses only two main functions:
triggerConversion() – Sets the config register bits & then writes that register back to the sensor (which automatically starts a single-shot reading). The ADS1115 auto-sleeps after the reading.
getConversion() – A do-while loop forces the CPU to continuously check the Operational Status bit. That bit change breaks the loop, and getConversion then reads the 16-bit output register.
With this single-shot approach, a short for-loop takes a burst of readings:
setMux(ADS1115_REG_CONFIG_MUX_DIFF_0_1);     // uses the #define statements
setSampleRate(ADS1115_REG_CONFIG_DR_860SPS); // for the config bitmasks from
setGain(ADS1115_REG_CONFIG_PGA_0_256V);      // the original Adafruit library
for (int i = 0; i < numberOfSamples; i++) { // I usually read 500 samples during testing,
  triggerConversion();                      // which fills the serial plotter
  ADS1115raw[i] = getConversion();          // window nicely
}
In single shot mode, you have to re-write the configuration register every time you want a reading:
Myers’ triggerConversion() function sets the config register with a common bitwise-OR method. I’m going to use this as a starting point, tweaking a few things for better readability and leaving out my standard 16-bit register functions so this page doesn’t scroll on forever. (also note that in addition to my typos, WordPress inserts a ton of invisible cruft characters that will mess with the IDE, so don’t copy/paste directly from this post…)
uint16_t config = 0;  // all bits set to 0
config |= _rate; config |= _gain; config |= _mux; // sets the matching bits to 1
bitSet(config, 8);    // MODE set to 1 = single-shot & power-down (the default)
bitSet(config, 15);   // setting the one-shot bit starts a conversion; the bit goes low when done
i2c_write16bitRegister(ADS1115address, ADS1115_REG_POINTER_CONFIG, config);
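The 16-bit register write itself isn’t shown in this post, but a Wire-based version would look something like this (a sketch; the ADS1115 expects the high byte first):

void i2c_write16bitRegister(uint8_t device, uint8_t reg, uint16_t value) {
  Wire.beginTransmission(device);
  Wire.write(reg);                     // address pointer register
  Wire.write((uint8_t)(value >> 8));   // MSB first, per the datasheet
  Wire.write((uint8_t)(value & 0xFF)); // then the LSB
  Wire.endTransmission();
}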
Sensor Triggered Sampling:
Let’s use the comparator to start each burst of readings in response to me tapping the surface of the desk that the piezo sensor is resting on. Although Myers’ polling method doesn’t use the ADC’s ALERT/RDY output, we are already set up for triggered bursts because all the comparator control bits were zeroed with config = 0 at the start:
COMP_MODE: 0 => traditional hysteresis mode (ON above Hi_thresh, OFF below Lo_thresh)
COMP_POL: 0 => an active ALERT pulls the pin LOW
COMP_LAT: 0 => non-latching
COMP_QUE: 00 => ALERT after one reading above Hi_thresh (01 = 2 reads, 10 = 4 reads)
With this as the starting point, all you have to do to initiate comparator threshold alerts is load some non-default trigger values into the Hi_thresh & Lo_thresh registers. Hi_thresh must be greater than Lo_thresh, and you have to use 2’s-complement values (so a negative threshold like decimal −250 would be written as 0xFF06):
// set the Lo_thresh register (0x02) to the 2's-complement equivalent of decimal 250:
i2c_write16bitRegister(ADS1115address, 0x02, 0x00FA);
// set the Hi_thresh register (0x03) to the equivalent of decimal 300:
i2c_write16bitRegister(ADS1115address, 0x03, 0x012C);
Now we need a way to make the processor respond to the ADC’s alert. If I wanted to use the same power-wasting methods you find in most sensor libraries, I’d connect ALRT/RDY from the ADC to any digital input pin, and poll the pin until it goes low:
void pollAlertReadyPin() {            // this code will time out eventually
  for (uint32_t i = 0; i < 100000; i++) {
    if (!digitalRead(AlertReadyPin)) return;
  }
  Serial.println("Timeout waiting for AlertReadyPin, it's stuck high!");
}
This might be OK for an IOT sensor hanging off of a wall-wart. But for logging applications a hardware interrupt based approach lets you save power by sleeping the processor until the trigger event happens:
void INT1pin_triggered() {
INT1_Flag = true;
}
// – – – – – – later on – – – – – – – – in the main loop – – – – – – – – – – –
uint16_t config = 0;                          // All bits set to 0
config |= ADS1115_REG_CONFIG_MUX_DIFF_0_1;    // using #defines from the Adafruit lib
config |= ADS1115_REG_CONFIG_DR_475SPS;
config |= ADS1115_REG_CONFIG_PGA_0_256V;
bitClear(config, 8);                          // MODE bit = 0: continuous sampling (redundant here)
i2c_write16bitRegister(ADS1115address, ADS1115_REG_POINTER_CONFIG, config);
i2c_write16bitRegister(ADS1115address, 0x02, 0x00FA); // set Lo_thresh = 250
i2c_write16bitRegister(ADS1115address, 0x03, 0x012C); // set Hi_thresh = 300
// ALRT/RDY output from ADC is connected to the hardware INT1 pin
set_sleep_mode(SLEEP_MODE_PWR_DOWN);
bitSet(EIFR, INTF1);                  // clear pre-existing system flags on the INT1 pin
noInterrupts();
attachInterrupt(1, INT1pin_triggered, FALLING);
INT1_Flag = false;                    // reset the flag before the do loop
sleep_enable();
do {                                  // this loop keeps the processor asleep until INT1_Flag = true
  interrupts();
  sleep_cpu();
  noInterrupts();
} while (!INT1_Flag);
detachInterrupt(1);
sleep_disable();
interrupts();
// after waking, reset the threshold registers to their defaults to disable the ALERTs
i2c_write16bitRegister(ADS1115address,0x02,0x8000);// Lo_thresh default = 8000h
i2c_write16bitRegister(ADS1115address,0x03,0x7FFF);// Hi_thresh default = 7FFFh
// now gather single-shot readings as before
for (int i = 0; i < numberOfSamples; i++) {
  triggerConversion();                // resets the config register to single-shot mode every cycle
  ADS1115raw[i] = getConversion();
}
With 250 / 300 set as thresholds, tapping the desk beside the piezo produced:
RAW ADC output vs Sample Number: Threshold triggered: A0-A1 differential, 475SPS, 16x PGA, 500 samples
With readings above 2000, I was hitting the desk a bit too hard for that 5mv target range. And 475 samples-per-second is not quite fast enough to show piezo sensor behavior. Zooming in also shows that the ADC was aliasing through some background signal:
RAW ADC output vs Sample Number: ADS1115 & piezo sensor, A0-A1 differential, 475SPS, 16x PGA, 500 samples
That’s a classic ‘mains hum’ problem. Annoying, but from a research perspective the loss of information from the start of the event is more of an issue: what happens if we only get one chance to record our event?
Pretriggered acquisition:
To capture infrequent events, I need to start acquiring data before the reference trigger. And since the waiting period is unknown, those readings need to go into a circular buffer that wraps around and stores each new sample over the oldest one in memory. With this approach the trigger event actually serves to stop the acquisition rather than start it. And you want that stop to happen gradually, collecting a set number of post-trigger readings, so the samples in the array represent a “slice-in-time” covering the entire event.
The real trick is to sleep the main processor as much as possible during the pre-fetch period. In continuous conversion mode the ADS1115 can signal each completed reading with a brief (~8 µs) conversion-ready pulse on ALERT/RDY, but with only one alarm output, the ‘threshold detection’ will have to be done in software:
void INT1pin_triggered() {
INT1_Flag = true;
}
// – – – – – – – – in the main loop – – – – – – – – – – –
uint16_t config = 0;                          // All bits set to 0
config |= ADS1115_REG_CONFIG_MUX_DIFF_0_1;    // #defines from the Adafruit lib
config |= ADS1115_REG_CONFIG_DR_860SPS;       // the max speed
config |= ADS1115_REG_CONFIG_PGA_0_256V;      // maximum gain
bitClear(config, 8);                          // MODE bit = 0: continuous sampling (redundant)
i2c_write16bitRegister(ADS1115address, ADS1115_REG_POINTER_CONFIG, config);
// continuous-mode conversion-ready ‘pulses’ (~8 µs) require these specific values in the threshold registers:
i2c_write16bitRegister(ADS1115address, 0x02, 0x0000); //Lo_thresh MS bit must be 0
i2c_write16bitRegister(ADS1115address, 0x03, 0x8000); //Hi_thresh MS bit must be 1
// ALRT/RDY output from ADC is connected to the hardware INT1 pin
// housekeeping variables for sampling loop control:
bool triggerHasHappened = false;
int countdown = numberOfSamples/2;    // sets the # of samples taken AFTER the trigger event
int countup = 0;                      // if triggered before the array is 1/2 full, countup fills the remainder
int arrayPointer = 0;                 // tracks where we are in the circular buffer
set_sleep_mode(SLEEP_MODE_PWR_DOWN);
// now loop forever until the trigger event starts the countdown,
// then collect numberOfSamples/2 more readings
while (countdown > 0) {
  bitSet(EIFR, INTF1);                // clear any pre-existing system flags on the INT1 pin
  noInterrupts();
  attachInterrupt(1, INT1pin_triggered, FALLING);
  INT1_Flag = false;                  // reset the flag before the do loop
  sleep_enable();
  do {                                // short sleeps while waiting for each ADC reading
    interrupts();
    sleep_cpu();
    noInterrupts();
  } while (!INT1_Flag);
  detachInterrupt(1);
  sleep_disable();
  interrupts();
  // load one reading into the ADS1115raw array
  ADS1115raw[arrayPointer] =
      i2c_read16bitRegister(ADS1115address, ADS1115_REG_POINTER_CONVERT);
  // here I'm using 200 as the threshold reading to start the countdown
  if ((ADS1115raw[arrayPointer] > 200) && (!triggerHasHappened)) {
    triggerHasHappened = true;              // only needs to occur once
    if (countup < (numberOfSamples/2)) {    // trigger happened before the array was 1/2 full
      countdown = countdown + ((numberOfSamples/2) - countup);
      // increases countdown by the difference so you always capture numberOfSamples
    }
  }
  if (triggerHasHappened) {                 // then only fill the last half of the array
    countdown = countdown - 1;              // limits the number of new readings
  }
  // advance arrayPointer with the ring-buffer modulus formula:
  // it automatically wraps back to zero when the pointer reaches the end of the array
  arrayPointer = (arrayPointer + 1) % numberOfSamples;
  countup = countup + 1;
} // ===== end of the while (countdown > 0) loop =====
sleep_disable();
// reset the registers to startup defaults to stop ADC continuous running
i2c_write16bitRegister(ADS1115address, ADS1115_REG_POINTER_CONFIG, 0x8583);
// sets: ±2.048V, 128SPS, no comparator, AIN0&AIN1, traditional mode, non-latching, no ALERTs, active LOW
i2c_write16bitRegister(ADS1115address, 0x02, 0x8000);   // Lo_thresh default = 8000h
i2c_write16bitRegister(ADS1115address, 0x03, 0x7FFF);   // Hi_thresh default = 7FFFh
// read the ADC output register to clear any residual ALERT/RDY latches
i2c_read16bitRegister(ADS1115address, ADS1115_REG_POINTER_CONVERT);
Setting countdown = numberOfSamples/2 centers the event in the array (although a 1/3 to 2/3 split might be better?). A tap on the desk with a pencil produced a ±5 mV swing (~640 raw counts), and my breadboard proto circuit is picking up almost 100 counts of mains hum.
RAW ADC output vs Sample #: 860sps, 16xPGA, diff. A0-A1, USB powered from laptop
Losing 20% of my available signal range to background cruft is a problem. Adding a $10 USB isolator reduced that by about 1/3. But the 60 Hz signal was still distorting the shape of the waveform significantly, or we’d be seeing a smoother damped harmonic.
RAW ADC output vs Sample #: 860sps, 16xPGA, diff. A0-A1, with USB isolator. Hit the desk a bit too hard on this one.
My first thought was ‘This is a well behaved, repeating signal – I’ll just subtract it out’. Unfortunately 860 SPS is not a multiple of 60 Hz, so simple corrections tend to pass in & out of phase with the hum – eventually making the situation worse. The misalignment means we’d need to do some serious math for the correction at 860 SPS, so I’m probably not going to be implementing that filtering on an 8-bit processor. Alternatively, I could go back to single-shot sampling and use the processor’s internal timers to request each new sample at some whole-number multiple of the 60 Hz mains cycle – say, 600 readings a second. The maximum possible would be 840, and you might want to add some jitter to that.
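As a sketch of that timer-paced idea (re-using the triggerConversion()/getConversion() functions and the ADS1115raw[] buffer described earlier), pacing single-shot requests with micros() at 600 SPS might look like this – the ~1.2 ms conversion at the 860SPS data-rate setting fits comfortably inside each 1666 µs slot:

const uint32_t samplePeriod_us = 1000000UL / 600;      // 600 SPS = a whole multiple of 60 Hz
uint32_t nextSample_us = micros();
for (int i = 0; i < numberOfSamples; i++) {
  while ((int32_t)(micros() - nextSample_us) < 0) { }  // wait for the next time slot (rollover-safe)
  nextSample_us += samplePeriod_us;
  triggerConversion();                                 // single-shot conversion @ the 860SPS setting
  ADS1115raw[i] = getConversion();                     // poll until the reading is ready
}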
Next I tried a run from batteries only, with the nearby electrical devices all turned off. This reduced the mains hum by ~10x relative to the USB tethered operation:
RAW ADC output vs Sample #: 860sps, 16xpga, differential A0-A1, Battery powered logger
A dramatic improvement, with the ‘pre-event’ noise falling below ±10 counts. Most of our field deployments are in caves, and this ADC looks like it has an acceptable noise floor for work in that kind of isolated environment. But the project also has a teaching component, so I’d also like to use this ADC module in classroom settings. Zooming in on that last graph shows that working with tiny sensor signals will be challenging if we are anywhere inside a building – even if the resting state of the system looks OK:
Once the sensor is set in motion, even tiny interferences from the mains will reinforce each other before the system settles again. Even if I use internal timing control to synchronize the readings with a whole-number multiple of the mains, it looks like I still won’t be able to use the ‘before’ data to fully correct the ‘after’ effects. This might be specific to the way piezo sensors resonate, but I’ve got some homework to characterize the effect before we start building a student lab with this module.
Looking on the bright side, even with a power-hungry 1284p-based logger, the current draw while capturing the pre-event readings averaged less than ~450 μA for the whole system.
The path forward:
The successor to the ADS1115 is the 24-bit ADS1219, which reads up to 1000 SPS (20 effective bits, PGA x4). It has integrated input buffers to allow measurement of high-impedance inputs & separate inputs for the analog vref (true ratiometric!) and digital power/ground. This gives you more options to mitigate power supply noise, which as we’ve seen can be important for small signals. It also offers some built-in 50 Hz and 60 Hz rejection, but only at slow sample rates. The ADS1115 is a delta-sigma converter, so it continuously samples its inputs (oversampling @ 250 kHz internally), which causes a transfer of charge. From the outside, this appears as a resistance which depends on both the sampling rate and the gain of the input stage. Higher gain for more sensitivity yields lower effective input resistance, which adds in parallel to the resistance of your sensor circuit. So if you were reading the output of a voltage divider (or its equivalent), the ADS1115 itself would change the result. The input buffers on the ADS1219 are a welcome addition.
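To put a rough number on that loading effect: treating the sampler’s charge transfer as a simple 710 kΩ resistance (the figure quoted at the highest gain), reading a divider with a 10 kΩ Thévenin source impedance would pull the result low by about 10/(10+710) ≈ 1.4% – several hundred LSBs at 16 bits – before you’ve accounted for anything else.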
The low input impedance of the ADS1115 can prevent you from using the higher gain settings in differential mode unless you add an opamp/buffer to prevent the ADC from putting too much drain on the sensor’s output. This is really what separates the ADS from ‘instrumentation quality’ components, which generally have much higher input impedances.
There are other 24-bit options in the hobbyist market like the HX711 (24-bit, PGA x32, 64, 128 – but only x32 on the 2nd channel?) that is commonly sold with load cells, and I’ve seen it mentioned that the SPI HX711 works with libraries written for the ADS123x series. The ADS1232 (24-bit, fixed x128 gain) might be an easier option for dedicated bridge sensors, and they can be found on eBay for ~$7. One advantage of the ADS123x over the HX711 is that they have a switch that can shut off current to the sensor bridge when the ADC is in Standby or PowerDown mode. Of course, then you have the problem that load cells take some time to warm up when power is applied, often taking several minutes to stabilize. You occasionally see seismometer projects using the 32-bit ADS1262, which has a sensitivity >1000x better than the 1115, but with a fairly slow sample rate.
This circuit from Gadget Reboot shows one method of obtaining programmable gain control using an X9C digital potentiometer in the opamp feedback loop. See: Part 1 & Part 2. The DS3502 gives you an I2C-bus version with the same 10k range, though I have no idea what the long-term stability of these digital pots is. And 5% tolerance is a bit grim.
But this little experiment has me wondering if, for signals in the 1 mV range, it might be better to spend more effort amplifying rather than moving to higher-resolution ADCs. If the real issues are going to be noise and drift, then those might be easier to deal with if the level is boosted first. Microphone preamps can be made from a single 2N3904 transistor and placed in front of a (200x) LM386 module for less than 50¢, though I suspect there might be lots of distortion. A general purpose (100x) LM358 might do the job on its own, or a (1000x) INA333 or AD623 module (with the trimpot), which can usually be had for less than $6, as can the AD8221AR. The INA129-HT gets you to 10,000x for ~$9. What I’d really like is an amplifier with the same simplicity as the ADS1115’s PGA. If anyone knows of a cheap I2C/register-controlled opamp module in the hobby-market price range, I’d love to hear about it.
Addendum 2020-05-24: Interrupt latency with wake from sleep
I just watched an interesting video about sleeping the ESP32 processors and was quite surprised to find out how long (150 µs) and how variable the hardware interrupt latency is on these Espressif processors. This set me down the rabbit hole of finding out what the latency is on the AVR processors. On a normally running processor you enter the ISR in 23 clock cycles, which is about 1.5 µs @ 16 MHz. However, if you loop through POWER_DOWN there are extra things to consider, like the fact that disabling the BOD in software (just before sleep) is going to add 60 µs to your wake-up time. You also have an ‘oscillator stabilization period’ of 16k CPU cycles with a standard external oscillator. [see Sect. 10.2 of the datasheet] The net result is that the wake/start-up time for an 8 MHz Arduino is ~1.95 ms. AVRs with 16 MHz clocks like the one I used for this test should have a wake-up time of less than 1 ms. So I was actually cutting it close by combining full POWER_DOWN sleep & the ADS1115’s highest sampling rate. A 3.3v Pro Mini based build @ 8 MHz would not have kept up unless I used SLEEP_MODE_IDLE, which keeps the main oscillator running and avoids that long stabilization delay.
Other projects using this ADC:
While I’m giving it a B rating for my current use case, this $1 ADC module is probably one of the best options once your signals get above 10 mV. The UNO/ADS1115 combo is a ready replacement for benchtop DAQs, especially since you can add up to four of the modules on the same bus for multi-channel capability. This build of InstESRE’s Pyranometer solders a PDB-C139 directly onto the ADS1115 module, and adds an analog TMP36 for temperature correction.
If you actually want mains signals in your data, then Open Energy Monitor has a project reading AC with the YHDC SCT-013-000. Current sensors like that often produce readings that are not referenced to ground, so you have to use an ADC capable of differential readings. Although this project focuses on getting the most out of cheap eBay modules, the ADS1115 repeatedly makes appearances alongside more pricey sensors, like this DIY nitrox tester, and this rather impressive Air Quality Index (AQI) monitor from Ryan Kinnett. Those low-power modules from Spec-Sensor look very interesting…
In this tutorial, a logger is built using a 3.3v Moteino MEGA with a 1284p CPU @ 16 MHz, with 4K EEPROM, 16K SRAM for variables & 128K of program space – considerably more than the 328’s 1K EEPROM, 2K RAM & 32K progmem. It also has a spare serial port for GPS/NMEA sensors.
In the 2018 paper we tried to convey that it doesn’t matter which processor you use with our system as long as it’s supported by the Arduino IDE. The ATmega family includes several CPUs more capable than the humble 328p in the Pro Mini / UNO. In fact, some have suggested that the 1284 would have been a better choice right from the start, with full code compatibility once you account for the different port / pin mapping. We built several loggers around that chip early in the project, but at the time I was mostly just leaning on the extra memory to compensate for some fairly crude programming. As my skills improved, and I stopped using bloated sensor libraries, that problem went away. So it’s been a while since I needed anything more than the 328p. But we have a new project spinning up this year that calls for burst sampling of high-bit ADC channels, and we’ll need more SRAM to juggle that data with some fairly large buffer-arrays.
Dave’s protoboard build
For those who build from scratch, Dave Cheney shows one approach to starting with the raw chip. At the opposite end of the spectrum, Stroud Research Center’s Mayfly is a good combination if you’d prefer something ready-made. But I still have a few Moteino MEGAs lying around, and this project has always followed the ‘middle path’ to enlightenment. So we’ll use one of those boards to demonstrate how easy it is to give our classroom logger a processor upgrade:
Add Screw Terminals to the Moteino: (click images to enlarge)
20 screw terminal ports per side, OR you could solder 2.54mm blocks directly to the MEGA. But they are not as robust for multiple reconnects during prototype experiments.
The Moteino MEGA ‘just fits’ between the rails on the Raspberry Pi adapter. The ProMini XL fits on a Nano shield – but I like having all those extra I/O pins on the MEGA to play with.
Pins on the MEGA board need to be bent ‘slightly’ to align with the header holes on the Rpi shield.
This is the basic version of the MEGA, but it’s also available with flash & radio transceiver options. LowPowerLabs also sells the Current Ranger – an extremely useful tool for checking the sleep current on my loggers. The shiny surface is due to the conformal coating we put on all our components. Clear nail polish also works once you’ve cleaned all the flux.
At the time of this writing there are a few 1284 boards on the market: the Moteino MEGA, the Megabrick, the Dwee, and the ultra-compact Pro Mini XL. Other projects sporting the chip seem to pop up regularly, though few seem to last very long. This is a shame because the 1284 has enough juice to do single-chip emulation of early 8-bit computers. But perhaps the chip never really caught on in Arduino-land because few beginners need that much capability. Sustained interest from dedicated makers means that some version, or at least DIY instructions, will be available as long as the chip is in production. If I had to, the terminal expansion board I’ve used here would let me build one of those from the raw chip & a few passives. The trick would be making templates for the boards.txt and pins_arduino.h, and finding a bootloader that matches the system clock.
Component Prep & Logger Assembly:
Black = GND, Purple = MISO, Brown = CLocK, Orange = MOSI, Grey = CSelect, Red = 3v3
Foam tape holds the Dupont jumpers together & provides accidental contact protection on the header pins. 3.3v system lets you connect the SD card directly
Place the SD module to one side, leaving room for wire pass-through under the MEGA board. Note: a 10k pullup on the CLocK line was removed because mode 0 sleeps low.
Route all but the GND wire under the main board
Score the insulation at ~15mm, and ROLL THE INSULATION between your fingers to TWIST the thin 30AWG strands together.
Add a 5mm ‘hook’ to provide more wire under the screw terminal contacts
SD power is controlled by switching the GND line with a TN0702 logic level mosfet on D0 -> After <- SPI peripheral is disabled & all SPI bus lines are pulled up.
Diffused common cathode RGB w GND pin bent to identify. No limit resistor is needed if you use INPUT_PULLUP to light each color channel.
Attach the GND wires before putting the MEGA into the Plano 3440 stowaway box being used as a housing.
Here using 3xAA to supply the MCP1700 reg. on the MEGA. For unregulated builds, I use 2x Lithium AAs with flat discharge curves.
Blk (GND), Red (3.3v), White (SDA), Yellow (SCL), Blue (SQW) with Dupont pin headers added to the cascade port. 32K is not used as our power-mod disables that output.
170 tie point breadboard completes assembly. I2C bus is tapped from the RTC modules 4-pin cascade port.
For comparison, here’s a regulated Pro-Mini based build from 2019. In 2020 we switched the 328p builds to 2xAA with no regulator.
The Moteino MEGA also makes Aref available (purple). Setting that to the internal 1.1v bandgap gives you a reference voltage with thermal stability similar to a TL431, provided you leave the ADC running long enough to stabilize the cap on Aref & account for the internal resistance.
Now all that’s left is a bit of code tweaking to change the I/O assignments in the codebase so those commands match the Moteino’s pinmap.
To complete the trim on this prototype I’ve added a 16-bit ADS1115 (I’ll be giving that ADC a serious workout soon & will post any interesting results). Pinning the add-on module vertically preserves space on that tiny breadboard. With the ADS on board I won’t be using the internal ADC for much other than battery tracking. However, analog pins will safely tolerate the mid-range voltages which cause serious leakage problems on normal digital I/O pins. This lets you use a resistor bridge for scaled ranging techniques.
With all that extra memory to play with I’ll finally get to try some graphing libraries with that OLED – a luxury I can’t usually afford with ProMini-based builds (though tiny plotter works in a pinch). And rumor has it that you can bit-bang I2C on ‘any’ GPIO pins, driving SSD1306 displays at up to 10 frames per second. We have plenty of I/O lines to spare now for those experiments.
All of Atmel’s ‘-P’ variants have low-power modes, and the logger shown here sleeps below 20 μA (without the OLED, which draws ~25 μA while sleeping) – that’s about the same as I usually get with the 328p-based units. The 1284 draws more runtime current (~17 mA), but you can reduce that considerably by throttling the system with a prescaler. If you then accelerate the I2C bus by the same factor with TWBR, your sensors are none the wiser. In fact I’ve had no problem polling the 1115 with the system clock brought all the way down to 62.5 kHz with clock_prescale_set(clock_div_256);
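For reference, a minimal sketch of that throttle-and-compensate trick (the values assume an 8 MHz board, and use the datasheet formula SCL = F_CPU / (16 + 2 × TWBR × prescaler)):

#include <avr/power.h>

clock_prescale_set(clock_div_8);    // 8 MHz system clock -> 1 MHz
// after slowing the CPU, lower TWBR to claw back I2C bus speed; the datasheet
// recommends TWBR >= 10 in master mode, which caps how far you can compensate
TWBR = 12;                          // SCL = 1 MHz / (16 + 2*12) = 25 kHz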
I guess the last thing on my list is a name for this beast. I had been thinking of calling it the ‘MEGA Pearl’, but a quick Google search convinced me otherwise. So I’m open to suggestions.
Well, that didn’t take long: a few have already pointed out that I missed the Microduino Core+ in the ‘currently available’ list. I’m sure there are more, so I’ll add them if I hear about others with a physical footprint small enough to work as a drop-in replacement in the student build. I skipped the 644p in the preamble, but it’s another good low-power option if you are looking for more program memory. The 1284 is pin-compatible with the 644, ATmega16, ATmega32 and probably a lot more.
Now that I’m playing with the 1284 again, I’ll also post any interesting projects I come across using these more powerful processors, such as this acoustic impulse marker or this body fat analyzer.
Addendum 2020-05-21: Using the ADS1115 in Continuous Mode for Burst Sampling
As promised, here’s a first look at that ADS1115 ADC module. For day to day tasks, it’s already something of a work-horse for Makers. But I wanted to push it out to the max settings to see if it’s ready for some typical research level applications. The answer is a tentative yes, but only at the fastest 860 sample per second speed. And with small signals EMI is more of a problem than the limitations of the ADC itself. I’m sure that’s not news to the analog hardware hackers out there, but I’m kind of a digital kid who’s still learning as I go along.
Our LED sensor experiments led to an interesting observation: when these ‘light-sensing’ loggers are left running overnight they still produce readings, because reverse-bias ‘leakage current’ eventually triggers the Input Capture Unit (ICU) – in the absence of any light. The speed of this self-discharge depends on the ambient temperature. If you cover an RGB LED with black heat shrink, the different color channels have different rates of thermal decay:
Temp (Celsius) vs ‘covered’ LED reverse-bias discharge time (seconds): Red, Blue & Green channels of a generic RGB LED. The LED was encapsulated in black heat shrink tubing and connected directly to the I/O pins with no limit resistor.
LEDs take a very long time to discharge compared to other diodes, so at those time scales you can capture the data with the non-ICU timing method of simply reading the high/low pin state in a loop. We use that simpler method in the 2019 classroom logger starter code on GitHub.
Both voltage and temperature affect reverse current, so these measurements must start from a stable, regulated voltage. Increasing the temperature by almost 20°C reduced the time to 20% of the low-temp value. The green channel appears to be more resistant to leakage, which is surprising given that reverse-bias currents are usually rated at ~1 µA for R/Y and ~10 µA for B/G/W colors. So perhaps this result says more about the volume & surface area of this particular unit than it does about LED color chemistry.
Even if I sleep the processor to save power, multi-minute readings would interfere with the other things we are recording on the Cave Pearl loggers. However, this LED-based approach has interesting applications where space is limited and temps fall within a warmer range. The idea also has a lot of potential in situations that require high levels of sensitivity, although there aren’t many of those that can wait such a long time for readings.
Checking if I could use the technique with other types of diodes led to this jeelabs post, where he compares the reverse-bias leakage in three common diodes at 5V:
1N4004 – a high power diode: 1.3 nA
1N4148 – a low power diode: 3.4 nA
BAT34 – a Schottky diode: 50 nA
He also had the realization that “the reverse current could even be used as a temperature sensor.” Small diodes have an internal capacitance of a few picofarads, so 5-50 nA will discharge them considerably faster than the LED channels I was using. In fact, reverse leakage increases so much with Schottky diodes that it can cause thermal instability issues which limit their useful reverse voltage to well below the max rating. Germanium diodes are even more susceptible.
Add black heat-shrink around diodes with clear encapsulation (like the one shown here) or they will also be affected by light levels in the local environment.
It’s worth noting that most diode-based temperature sensors use the change in forward voltage because that relationship is linear, with about 2 mV less voltage drop for every degree increase in temperature. But chasing a few millivolts with Arduino’s 10-bit ADC only allows a precision of ±1°C unless you add amplification, or some other trick. By comparison, leakage current can be expected to double with every 10°C increase in temperature, making higher resolutions possible with the same hardware. The trade-off is a non-linear relationship which produces variable resolution over the sensing range. And since leakage is also a byproduct of manufacturing variations, you need to calibrate each diode individually. That’s a show-stopper in production environments where that time costs more than the whole device, but not so much for DIY projects which need to run-test their builds for a few days anyway. We don’t usually send a logger into the field until it’s had several weeks of stable operation.
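Back-of-the-envelope: leakage that doubles every 10°C changes by ln(2)/10 ≈ 7% per degree, which lines up with the ~100 counts/°C on ~1300 raw counts seen in the Schottky test below. The 2 mV forward-voltage shift on a ~600 mV drop is only about a 0.3% change per degree – roughly a twenty-fold difference in relative sensitivity.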
Testing a 1N5819 Schottky Diode:
Here I’m timing the leakage discharge with Timer1 clock ticks from an 8 MHz 3.3v ProMini:
Temperature (°C) vs 1N5819 Schottky reverse-bias leakage discharge time (8 MHz clock cycles). Diode connected between D7 & the ICU on D8. Blue dots are Excel’s trend-line. That fit was better than I was expecting.
The Schottky discharges very quickly, with raw Timer1 counts of about 1300 at room temperature (~0.16 milliseconds) and about 100 counts of variation per °C. Counts increase as temperature falls, reaching ~5800 (~0.7 ms) at 6°C, with a delta of 580 counts per degree. The curve flattens out at the lower limit of this test with raw counts of about 62,000 (~7.7 ms) at -15°C, and a delta of 7000 counts/degree.
The timing jitter on these ICU readings ranges between 10-20 counts depending on the board (even with the 4x noise reduction enabled), and this is a significant source of error when you only have a per-degree delta of 100 counts. You can over-sample the Schottky to compensate, and testing showed that 256x oversampled readings produced results that looked very comparable to 1N4148 diodes (although some authorities say that timing jitter may be resistant to this smoothing technique). Even with oversampling, these short discharge times could become too brief to count with an 8 MHz ProMini at temps above 50°C. However, measuring cold temperatures can sometimes be more challenging than warm ones, and for those applications a fast-discharging diode like a Schottky might be preferred. With communications overhead, it’s not unusual for an I2C sensor reading to take 1-2 ms, so a Schottky might also be better for low-power systems trying to minimize CPU runtime.
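A sketch of that oversampling step, assuming a hypothetical readDiodeDischargeTime() wrapper around the ICU routine at the end of this post – 4^n samples buy you n extra bits, but only if the jitter behaves like random noise, which (as noted) is debatable:

uint64_t osSum = 0;
for (int i = 0; i < 256; i++) {
  osSum += readDiodeDischargeTime();     // hypothetical wrapper around the ICU read
}
uint32_t oversampled = (uint32_t)(osSum >> 4);  // 256 = 4^4 samples, decimated for 4 extra bits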
Testing a 1N4148 Signal Diode:
Temperature (°C) vs 1N4148 reverse-bias leakage discharge time (8 MHz clock cycles). Diode connected between D7 & the ICU on D8, wrapped in black heat shrink tubing. Blue dots are Excel’s trend-line fit.
The 1N4148 discharges more slowly, with raw Timer1 counts of about 36,000 at room temperature (~5 ms), and about 2000 counts of variation/degree at 25°C. Raw counts increase to ~158,000 (~20 ms) at 6°C, with a delta of ~17,000 counts per degree, and the lower limit of this test saw raw counts of ~1.3 million (166 ms) at -15°C, with a delta of 140,000 counts/degree.
The 1N4148 is the better sensor overall because it won’t drop below the Arduino’s timing capability at natural-environment temperatures, and its discharge takes long enough that jitter becomes an insignificant source of error. Even in colder environments, 166 ms of SLEEP_MODE_IDLE (which leaves Timer1 running for the clock-cycle count) only burns about 0.16 milliamp-seconds per reading on a ProMini. That’s not going to break our power budget.
Calibration:
It’s worth noting again that you must use a regulated system. In theory, a shifting supply voltage causes a corresponding change to the Schmitt trigger points on the I/O pins, which compensates to some extent; however, batteries have significant thermal mass, and this causes serious hysteresis problems when sensing temperature.
To calibrate my diodes, I covered them with black heat shrink tubing and taped them in physical contact with an si7051 sensor. Then I placed the logger into a rice-filled double ceramic pot (to add thermal mass) and moved the pot around the kitchen, from the radiators to the refrigerator & freezer. You want stable periods that let the reference & diode sensors equalize, using an average of 20 readings to smooth the compressor wobble at the lower end, and the crests/peaks at higher temperatures.
Typical si7051 (±0.1°C) reference temperature run for calibrating the 1N4148 diode. Boxes indicate the plateaus chosen for the calibration data points & the coverage areas of the closeup graphs shown in the ‘sets’ comparison below.
Excel trend-lines got reasonably close to the response from the Schottky & the 1N4148, perhaps needing only one more term for a better fit. Since thermistors are also semiconductor devices, I wondered if those diode decays would generate workable Steinhart-Hart (S&H) constants if I treat the raw Timer1 counts AS IF they were resistance values from an NTC:
Here I used 20-reading averages to compensate for the fact that the diode is higher resolution than the si7051 reference, and the long-integration 1N4148 readings have considerably less jitter than the IC sensor.
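I actually derive the constants in a spreadsheet, but for completeness, here’s a sketch of the standard closed-form solution for the three S&H coefficients from three (raw count, °C) calibration pairs – treating the counts as if they were resistances. The function name & signature are mine, not from our codebase:

#include <math.h>

void solveSteinhartHart(float count1, float tempC1,    // three calibration pairs
                        float count2, float tempC2,
                        float count3, float tempC3,
                        float &A, float &B, float &C) {
  float L1 = log(count1), L2 = log(count2), L3 = log(count3);   // ln of the 'resistance'
  float Y1 = 1.0 / (tempC1 + 273.15);                           // 1/T in Kelvin
  float Y2 = 1.0 / (tempC2 + 273.15);
  float Y3 = 1.0 / (tempC3 + 273.15);
  float g2 = (Y2 - Y1) / (L2 - L1);
  float g3 = (Y3 - Y1) / (L3 - L1);
  C = ((g3 - g2) / (L3 - L2)) / (L1 + L2 + L3);
  B = g2 - C * (L1 * L1 + L1 * L2 + L2 * L2);
  A = Y1 - (B + L1 * L1 * C) * L1;
}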
Then you can convert the discharge time to temperature with the Steinhart-Hart equation:
float Temp = log(RawDiodeDischargeTime);   // note that log(x) on Arduino is actually ln(x)!
Temp = COEFF_A + (COEFF_B * Temp) + (COEFF_C * (Temp * Temp * Temp));
Temp = (1 / Temp) - 273.15;                // -273.15 converts Kelvin to Celsius
(Note: I usually save raw readings on the loggers & convert them later in Excel. I’ve been burned several times by loss of significant figures during calculation on the 8-bit 328P processor)
That equation has a quoted accuracy of about ±0.1°C over a 100 degree range when used for a thermistor, but does this hold with a diode sensor? Yes – but over a smaller 40 degree range: (Click Image to Enlarge)
Comparison of si7051 reference temps (blue) vs 1N4148-based S&H calculations (red), in °C. Two examples shown with different ‘center’ points (21°C on the left & 5°C on the right) used to generate the three equation constants.
I chose these sets to show the calculation errors creeping in as you move farther from the points used to generate the constants. The calculated temperatures in this example drift ~0.07°C from the reference at a distance of ~15°C from the center point. A tighter set with calibration points at 5, 21, & 36°C produces a near-perfect fit inside that range, with the trade-off that temps down at -14°C then show an increased deviation >0.1°C. Overall, it’s about 30% more error than I’d expect to see when calibrating a cheap 10K thermistor with the same points. Given that our si7051 reference thermometer has a rated accuracy of ±0.13°C (datasheet pg 7), I think the best we can achieve for this diode-based method is ~±0.2°C at typical cave temperatures.
So max-middle-min gives you about 40 degrees of usable range, and you want at least one of your calibration points in the area of interest. That’s pretty good considering we are applying the thermistor equation to a different physical system. I will experiment with Solver to see if models with more parameters provide a better fit, but this is already good enough for most of our logger deployments.
Figure 14-1 I/O Pin Equivalent Schematic from the 328p datasheet. Those protection diodes can also cause problems when de-powering voltage dividers.
My gut feeling is that the re-purposed equation would work over a wider range if this were a single-diode system. However, AVR inputs are also connected to two protection diodes and a pull-up MOSFET. Each of these is subject to its own reverse-bias leakage to some extent, with the upper protection diode acting in direct opposition to the discharge of the ‘sensor’ diode. In fact, you can simply run the ICU timing code with nothing at all connected to the D8 pin, and it will still give you a temperature-based reading. That makes this the second ‘no parts temperature sensor’ method I’ve discovered for Arduino but, like LEDs, these diodes are low-leakage: taking five seconds for a reading at 20°C, and five minutes for a reading down at -14°C. Unless you change the prescaler, the raw numbers can exceed the range of easy calculation on a 328, and they show significant hysteresis due to the thermal mass of the chip & the ProMini board it’s attached to.
The implication here is that temperature sensing via this reverse-bias decay method has a sweet spot somewhere between the too-rapid response of a Schottky diode (which approaches the counting limits of an 8 MHz clock) and interference from the other stuff connected to Arduino I/O pins. 1N4148s work well, but I’m sure there are other diodes out there that could do a better job. I have yet to find any good data on the long-term stability of reverse-bias leakage, but we are not stressing the part by exceeding its reverse voltage rating, or running enough current to cause much self-heating. So I suspect that diode leakage is at least as stable as thermistor response over time. There’s a lot of further experimentation to do here, and given the tighter manufacturing spec, I’m curious to see if the method works with diode-connected transistors, which could make interchangeable temperature sensors possible.
I should also mention that some ‘better quality’ Arduino boards have temperature compensation embedded in their system oscillator. This is a bad thing for this ICU timing method because it introduces a sharp discontinuity in the clock speed when the compensation circuitry kicks in. The S&H constants can’t absorb that the way they do the normal ‘thermal response’ of a cheap oscillator, so the method works better on some boards than others. Another potential problem is moisture accumulating on surfaces – which could provide an alternate current path to discharge the diode. So as with our LED light sensing, desiccants are required inside the logger housing.
the CODE:
D7 is simply acting as a convenient GND connection.
I’ve left this till last, because it’s essentially just a tweaked version of the ICU timing method I posted for reading thermistors. With the diode discharge you trigger on the falling edge instead of the rising one, and you don’t have to read a reference resistor because we are treating the decay time itself as the ‘resistance’. The diode’s tiny internal capacitance charges through the INPUT_PULLUP resistor in a few microseconds, and there’s no need to discharge it afterward.
#include <avr/power.h> // for peripherals shutdown
#include <avr/sleep.h>   // to sleep the processor

volatile boolean triggered;
volatile uint16_t timer1CounterValue;
volatile uint16_t overflowCount;
ISR (TIMER1_OVF_vect) {   // triggers when Timer1 overflows: every 65536 system clock ticks
  overflowCount++;
}
ISR (TIMER1_CAPT_vect) {              // transfers the Timer1 count when D8 crosses the threshold
  if (triggered) { return; }          // multiple-trigger error catch
  timer1CounterValue = ICR1;          // the Input Capture register (datasheet p117)
  triggered = true;
  if ((TIFR1 & bit(TOV1)) && timer1CounterValue < 256) {   // 256 is an arbitrary low value
    overflowCount++;                  // in case we "just missed" an overflow
  }
  bitClear(TIMSK1, TOIE1);            // disable interrupts on Timer1 overflow
  bitClear(TIMSK1, ICIE1);            // disable input capture
}
void prepareForInterrupts() {
  noInterrupts();
  triggered = false;                  // reset for the do{...}while(!triggered); loop
  TCCR1A = 0;                         // set the entire TCCR1A register to 0
  TCCR1B = 0;                         // same for TCCR1B
  TIFR1 = bit(ICF1) | bit(TOV1);      // clear the flags so we don't get a bogus interrupt
  TCNT1 = 0;                          // initialize the counter value to 0
  overflowCount = 0;                  // reset the overflow counter
  bitSet(TCCR1B, CS10);               // set the prescaler to 1x system clock (F_CPU)
  bitSet(TIMSK1, TOIE1);              // interrupt on Timer1 overflow
  bitSet(TCCR1B, ICNC1);              // Input Capture Noise Canceler = 4x repeat before trigger
  bitClear(TCCR1B, ICES1);            // Input Capture Edge Select: 0 = falling edge
                                      // or use bitSet(TCCR1B, ICES1); to record the rising edge
  bitSet(TIMSK1, ICIE1);              // enable the Input Capture Unit
  TIFR1 = bit(ICF1) | bit(TOV1);      // clear the flags again (this may be unnecessary?)
  interrupts();
}
//========== READ DIODE connected between D7 --->|--- D8 (ICU) ===========
digitalWrite(7, LOW); pinMode(7, OUTPUT);   // D7 is simply acting as GND
digitalWrite(8, LOW); pinMode(8, OUTPUT);
power_timer0_disable();               // otherwise Timer0 generates an interrupt every millisecond or so
power_timer1_enable();                // this whole method depends on Timer1
bitSet(ACSR, ACD);                    // disable the analog comparator
// could disable other peripherals here to save power during IDLE
pinMode(8, INPUT_PULLUP);             // charges the diode's capacitance (occurs VERY quickly)
prepareForInterrupts();
noInterrupts();
set_sleep_mode(SLEEP_MODE_IDLE);      // IDLE leaves Timer1 running
sleep_enable();
PORTB ^= B00000001;                   // toggles OFF the pull-up on D8 (leaving the pin in INPUT)
TCNT1 = 0;                            // re-initialize the Timer1 counter
do {
  interrupts();
  sleep_cpu();                        // sleep until D8 falls to the ~33% threshold voltage
  noInterrupts();
} while (!triggered);                 // trapped here until TIMER1_CAPT_vect sets triggered = true
uint32_t diodeDischargeTime = ((uint32_t)overflowCount * 65536) + timer1CounterValue;
// (65536 ticks per overflow; change to uint64_t calculations when timing diodes that decay slowly)
sleep_disable();
interrupts();
power_timer1_disable();               // cleanup
power_timer0_enable();                // needed for delay, micros, etc.
(Note: Integer arithmetic on the Arduino defaults to 16-bit & never promotes to higher-bit calculations unless you cast one of the numbers to a higher-bit integer first. After casting, the Arduino supports 64-bit “long long” int64_t & uint64_t integers for large-number calculations, but they do gobble up lots of program memory space – typically adding 1 to 3k to the compiled size. Also, Arduino’s printing functions cannot handle 64-bit numbers, so you have to slice them into smaller pieces before using any .print functions.)
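One way to do that slicing, sketched here for values up to ~4×10^18 – the zero-padding keeps the two pieces concatenating correctly:

void printUint64(uint64_t big) {
  uint32_t upper = (uint32_t)(big / 1000000000UL);   // everything above the lower 9 digits
  uint32_t lower = (uint32_t)(big % 1000000000UL);
  if (upper) {
    Serial.print(upper);
    for (uint32_t p = 100000000UL; p > lower && p > 1; p /= 10) {
      Serial.print('0');                             // zero-pad the lower 9 digits
    }
  }
  Serial.print(lower);
}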
Addendum 2021-01-24 Don’t sleep with regular I/O pins.
I’ve been noodling around with other discharge timing methods and came across something that’s relevant to using these methods on other digital pins. Here’s the schematic from the actual 328 datasheet, with a bit of highlighting added. The green path is PINx. It’s always available to the databus through the synchronizer (except in SLEEP mode?). The purple path is PORTx. Whether or not it is connected to PINx depends on the state of DDRx (the yellow path).
As shown in the figure of General Digital I/O, the digital input signal can be clamped to ground at the input of the Schmitt Trigger. The signal denoted SLEEP in the figure, is set by the MCU Sleep Controller in Power-down mode and Standby mode to avoid high power consumption if some input signals are left floating, or have an analog signal level close to VCC/2.
When sleeping, any GPIO that is not used as an interrupt input has its input buffer disconnected from the pin and is clamped LOW by that MOSFET.
Clearly D8 on the ICU must be one of those ‘interrupt exceptions’, or the thermal discharge of the diode would have been grounded out by entering the sleep state. If you use a similar method on regular I/O pins, you can’t sleep the processor in that central do-while loop.
Here I’m using a 2019 (v. regulated) classroom logger to create a custom ‘Leaf Transmittance Index’ based on readings from an IR LED and the red channel of the RGB indicator already on the logger. Although using generic LEDs introduces non-optimal aspects wrt frequency & bandwidth, the trial successfully distinguished ‘healthy’ vs ‘unhealthy’ plant leaves where a simple visual inspection could not.
When we released the 2019 version of the classroom logger, we updated the starter script to include a technique that uses the indicator LED as a light sensor. This under-appreciated technique leverages the timing capability of microprocessor inputs, rather than the more common approach of using an op-amp to amplify the sensor output. Reversing the potential across a diode charges its internal capacitance, which can then be discharged by light photons hitting the surface. In ‘reverse bias’ mode the photon flux is linearly related to the discharge current; however, this depletion method changes the voltage across the capacitor at the same time (+ other factors), so we see a response with exponential decay instead of a linear one.
Electrically speaking, there is little difference between an LED and a typical photo-diode sensor; however, an LED’s capacitance is considerably smaller (25–60 pF). The tiny light-sensing surface area of an LED (~0.1 mm²) only generates about 50 pA of discharge current in normal ambient conditions, and the reverse leakage through LEDs is exceptionally low (~0.002 pA). The net result is that LEDs are rather slow light detectors, and this phenomenon would be nothing more than a curiosity except for one important aspect: most LEDs detect a relatively narrow band of light wavelengths, making it possible to build a frequency-selective detector without the filters (or monochromators) you’d need to do the same job with photo-diodes or LDRs. That sensitivity band often has less drift over time than many types of filters, and the discharge/photocurrent method has less temperature dependence than using the same LED in photovoltaic mode.
Illustration from that same Sensors review paper, but originally from: Novel fused-LEDs devices as optical sensors for colorimetric analysis. Talanta 2004, 63, 167–173. Sometimes these emitter-receiver pairs are dip-coated with a chemo-reactive membrane. Their next paper, Quantitative colorimetric analysis of dye mixtures using an optical photometer based on LED array, Sensors and Actuators (2006), used a series of different emitter LEDs and a low band-gap IR LED as a universal light detector. Xiao et al. in 2009 used blue excitation LEDs that match the excitation wavelengths of common fluorescent dyes like fluorescein.
This makes a host of new LED-based instruments possible at the DIY level, and Forrest Mims demonstrated this with some elegant experiments using near-IR LEDs to detect atmospheric water vapor, aerosols with twilight photometers, and he even proved that a single red LED reading provides a reasonable proxy for total PAR (using a red gallium phosphide (GaP) LED with a wide (115 nm FWHM) absorption band @ 600-655 nm). Since Mims’ pioneering work in 1977, the number of applications for LED sensors has grown so fast that now it’s hard to keep up with the ‘review papers’, let alone the individual publications. Bench-top chemistry is seeing a host of fluorescence & reaction-cell experiments based on frequency-matched LED emitter-detector pairs. By rapidly toggling the same LED between emitting and detecting light, several projects have created other types of sensors, like ocean pH. We can only imagine what will happen when up-converting nanoparticles get thrown into that mix.
What can we do with our logger using this LED measurement technique?
Here, a day of raw readings from all three LED channels is compared to an LDR in the same classroom logger. The unit was deployed in a south-facing window with diffusing tape over the housing surface.
Light detectors are often used to make measurements of energy balance, usually by tracking solar insolation. Using the RGB indicator LED already on the logger means we only have a limited number of light frequencies to work with, so we can’t create a ‘full spectrum’ pyranometer unless we use a more advanced solution like SparkFun’s Triad Spectroscopy Sensor. Combining that with a good mounting bracket would provide enough frequency coverage to match some commercial instruments.
Despite this limitation, a few dedicated groups have proven that LED photometers can still be quite capable – most notably the 2-LED GLOBE program photometers by Brooks et al. at the Institute for Earth Science Research and Education. It is quite inspiring to see students using hand-made instruments to produce research good enough to publish in peer-reviewed journals.
The GLOBE device uses a more traditional op-amp approach to reading the LEDs, but several aspects of those instruments are directly transferable to other light-sensing projects:
Students can manually aim the detectors at the sun, enabling a basic instrument to do the work of more complicated “sun tracking” machines that use directional control, collimators or shadow bands to measure diffuse irradiance. From Mims PAR paper: “Measurements are made each day at or near local solar noon when the solar disk is not obscured by clouds. Measurements are made by placing the radiometer on a level platform 175 cm above an open grass field. Two measurements are made of each channel; first the full sky and then the diffuse sky. The latter measurement is made when the diffuser of the sensor is shaded by a 19 mm diameter disk mounted on a rod.”
Two or more measurements are needed because it’s the difference between those readings that allows you to derive the property you were trying to measure. For example, if you had two diodes, with one responding to UV-A and the other responding to UV-B, the difference between those readings could be attributed to atmospheric ozone. Comparing readings from two IR sensors, with one at the 940 nm H2O absorption peak and another at 890 nm, would let you derive water content.
This also requires correcting for scattering/absorbance by the atmosphere (path length = 1/cos(θ)) based on the sun’s angle in the sky. (Also note: many DIY PAR projects hack the white plastic domes out of old photometers as cheap cosine correctors.) Better instruments also correct for the ~1%/°C temperature coefficient of red-spectrum LEDs, which is higher than that of silicon photodiodes.
The biggest challenge is determining the unique absorption band of the LED you are working with, since that information is not supplied by manufacturers. The process usually requires testing the LED(s) with a wavelength-scanning monochromatic light source.
All of Forrest Mims’ LED-based experiments could be replicated with the Cave Pearl classroom logger using the capacitance-discharge timing method instead of an op-amp for current-to-voltage translation. But there’s more information to be had from LED sensors by anyone willing to do a little tinkering, especially in the area of vegetation monitoring.
A friend recently sent me a link to Rick Shory’s extensive work on the greenlogger which hit the in-box around the same time as SciAm’s article: Earth Stopped Getting Greener 20 Years Ago. Reading about the global decline in vegetation set me on a deep dive into how indexes are used in bio-physical monitoring:
NDVI was developed to estimate vegetation cover from remote sensing data. It is calculated from red and NIR spectral reflectance measurements, and the first key understanding is that spectral reflectances are normalized ratios of the reflected over the incoming radiation in each spectral band. Feeding those ratios into the NDVI calculation means that it can only produce values between -1 to +1.
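For reference, the calculation itself is simply NDVI = (NIR − Red) / (NIR + Red), where NIR and Red are the reflectances in each band. Since both reflectances are positive, the difference can never exceed the sum, which is why the output is pinned between −1 and +1.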
The second key understanding is that using the ratio of the difference of the red & infrared over their sum corrects for the effect of the solar zenith angle. This eliminates irradiance from the equation, and largely corrects for differences due to topography and transmittance loss in the atmosphere. This allows the comparison of remote sensing data from different times of day, and different latitudes.
NASA uses NDVI as an indicator of drought. When water limits vegetation growth, it has a different relative NDVI than when the same plant is hydrated, because the spongy mesophyll layer deteriorates and the plant absorbs more of that near-infrared light rather than reflecting it. This is a significant factor for agricultural yield prediction.
Moderate NDVI values represent a low density of vegetation (0.1 to 0.3), while high values indicate dense vegetation (0.6 to 0.8). Values near zero indicate water cover, and lower values of NDVI (-0.1 and below) correspond to barren areas of rock, sand, or urban/built-up land. In addition to land cover classification, people use NDVI to infer parameters like Leaf Area Index (LAI), fractional light interception (fPAR), wildfire burn-area, and other aspects of the biological environment. But the NDVI index also has some limitations: any time there’s very low vegetation cover (the majority of the scene is bare earth), NDVI will be sensitive to that soil. At the other extreme, where there’s a large amount of vegetation, NDVI tends to saturate.
Over time NDVI has been tweaked in various ways, and today there are a large number of different broadband ‘greenness’ indexes that accent different aspects of plant physiology. And the booming agricultural drone business seems to be inventing more by the day, with claims that somehow their camera tweak produces a new index that’s superior to those of its competitors, while those competitors make equally strident claims that company #1 doesn’t know what they are talking about. Public Lab has an active community of people hacking cameras for this kind of imagery.
Can we use the RGB LED on the Classroom logger to measure a Vegetation index?
The first challenge is figuring out what we can actually detect, since each index works with a given set of frequencies. An LED will only detect light with higher-energy photons than the light it emits, so a blue LED will be unable to detect red frequencies. LEDs generally have a peak detection capability 20-60 nm shorter than the wavelength they emit, with the range widening, but this information is rarely available from manufacturers because it’s a use case they were never designed for. So there are not many sources that compare the emission and detection frequencies for the different LED chemistries. LEDs: Sources and Intrinsically Bandwidth-Limited Detectors (Figure 5) has a reasonable list of specific LEDs characterized as both sources and detectors, but even they didn’t bother to test a garden variety RGB. Fortunately, a few researchers working on visible light communication projects have tested them, generally finding that the blue emitter shifts into the UV-A 320-400 nm detection range (possibly near one of the UV-A peaks of the phototropic action spectrum?), the green emitter shifts down to about 440-460 nm (detecting in the chlorophyll a/b blue absorption bands?), and the red LED channel shifts down to ~680 nm, with a spectral spread 2 to 3 times wider than its emission band (overlapping the chlorophyll-a red absorption peak?).
Testing how a 488 nm dichroic mirror (blue cut-off filter) affected readings on the green LED detection channel. Note that in this case the round lens was also removed from the top of the LED with sandpaper, to both collimate and diffuse the incoming light. But with the PTFE tape layer added later, the sanding was unnecessary for the index measurements.
But was this true for my LED? Since we didn’t have a “wavelength scanning monochromator” just lying around, I tested the green channel with a blue-cutoff dichroic mirror. Unlike regular filters, dichroic mirrors are intended to work only for incoming light that is normal to their surface – but given the tight 20-degree dispersion angle of typical 5 mm LEDs, that’s probably OK. If I was looking for a range of different filters on the cheap, I’d probably look at safety glasses designed for laser work – they usually come with well-specified transmission curves & very sharp cut-offs at certain frequencies.
Sure enough, the discharge on the ‘blue-shifted’ green channel took more than 10x longer with the filter in place, indicating that the green channel was sensitive to frequencies below the filter’s 488 nm cut-off. If the red LED channel also follows the pattern from Bipso’s paper, then red’s detection will include some green & some red, with a peak at yellow. We might be able to use these frequencies for a BLUE vs GREEN variant of NDVI, but several sources indicate that blue indexes are sub-optimal because that chlorophyll absorption is strongly overlapped by carotene. So blue-based indexes usually show less contrast between stressed and non-stressed plants. The loss of both the blue channel (now a UV detector) and the green channel (now a blue detector) meant that we needed to add an IR LED to have enough information for a viable index.
IR obstacle avoidance modules are one inexpensive source of IR LEDs. These sensors are somewhat limited when used for their intended purpose, but a little cut & paste lets you merge the emitter LED with a daylight filter ‘cap’ cut from the photo-transistor on the same board:
IR detectors can be sensitive to visible light unless you add a daylight filter, and this little hack is much cheaper than buying a Wratten 87, 87B, or 87C. These distance modules usually have LEDs which emit at 940 nm, but the same shorter-wavelength shift also applies to the IR LED, and this pulls the detection peak into the 920 nm range – safely out of the absorbance well created by atmospheric water vapor(?). (Something to keep in mind if you are actually trying to build a water vapor sensor – you probably need to select an emitter at least 25 nm above 940.)
Note that the negative terminal of the IR LED is under the same screw terminal (D7) as the common cathode leg of the RGB indicator LED. Collimating the LEDs with heat shrink tubing should also make them less sensitive to light reflections inside the logger. Note: our GitHub code uses a port command that assumes that shared ‘negative’ pin to be one of D3 to D7 on the pro mini. Here I’ve set #define LED_GROUND_PIN 7 as the common GND connection, RED_PIN 6, GREEN_PIN 5, and the positive side of the IR LED is on pin D8. Set #define BLUE_PIN 8 to force the former blue-pin code to take the IR LED reading. Also note that our code initially ‘lights’ the indicator LED via INPUT_PULLUP mode, instead of setting the pins to OUTPUT & HIGH, so limiting resistors are not necessary for that step.
Even with a filter and heat shrink tubing the IR LED was easily saturated, and it took 28 layers of plumbers tape to bring readings from the IR LED into approximately the same range as the RGB readings (which had only 1 layer of diffusing PTFE tape on it). This was done with the logger tethered to a laptop, displaying readings in the serial monitor window. I simply added one layer at a time until the readings under direct full sunshine for both the IR & red channels were in the 300 to 500 range (using our logger code from GitHub). Then I added some heat shrink to hold those layers of teflon tape in place, leaving only the round dome of the LED(s) exposed.
Even with IR on board, the lack of a sharp red detector means we can’t produce the traditional NDVI (unless we add a Wratten 25). With the ‘red’ LED channel actually detecting from 525 to 650 (FWHM 560-610) and peaking around 575nm, we have to invent our own pseudo index that might be described as ‘lime-green’ NDVI. This actually isn’t so bad, as several recent papers demonstrate that green-based NDVIs are more sensitive to chlorophyll than the red-based indexes, and they have a stronger correlation with total leaf area. (It’s also worth noting how close we are to the bands typically used for pulse oximetry, where the reading from a red LED at 600nm is ratioed with a second measurement from an IR LED responding at 950nm)
Other Challenges:
With a tinfoil wrap, light can only enter through the 5 x 8cm exposure window above the LEDs. One layer of blank label-maker tape on the lid diffuses the light and prevents the plastic struts inside the housing from creating hot-spots. A final outer wrap of clear packing tape protects the tinfoil.
Bandgap voltages vary with temperature, changing the LED emission wavelength by ~0.1nm/°C. Detection wavelengths should follow suit, so it’s probably best to make sure the temperature varies as little as possible between our scale points and the target readings. Since the LED detectors are inside a Plano box, there is potential for some frequencies to be lost, but materials like high density polyethylene (HDPE) have remarkably smooth absorption curves that don’t become extreme until you reach UV. The fact that VIs are ‘a ratio of ratios’ means that we have to compare the raw sensor readings to direct insolation values before the index can be calculated. Housing losses should affect the high reference and the target readings in the same way, so they should not throw off the final index – essentially we treat them like transmittance loss in the atmosphere. And I need to use a desiccant because this is a capacitance-discharge method, so things like ambient humidity could affect the capacitance.
Since we can’t know the responsivity of our garden variety LEDs without lab testing, the best we can do is standardize the data by scaling it against a “maximum” reading (when the sun is shining directly on the top of the logger) and a second “minimum” point for each channel (obtained by covering the entire logger with tinfoil). The dark-point reading also addresses thermal leakage through the LEDs. One drawback of this mcu-based method is that the raw discharge-time readings follow an exponential decay curve, so we need to take the log of those readings to linearize the data before scaling, and since our decay-time readings are inversely related to the photon flux, we need to apply 1/ln(reading) to all data before the max/min scaling. Technically speaking, photon flux is not the same as irradiance, but the index’s normalization sweeps that little issue under the rug too.
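A minimal sketch of that normalization, assuming hypothetical names for the raw discharge counts and the two scale points (the GitHub code may organize this differently):

#include <math.h>
// decay time is inversely related to photon flux, so linearize with 1/ln(rawCount)
float linearize(uint32_t rawCount) { return 1.0 / log((float)rawCount); }
// scale to a 0-100 '% of full sun' value between the tinfoil dark point & the direct-sun point
float scaleChannel(uint32_t raw, uint32_t darkRaw, uint32_t sunRaw) {
  return 100.0 * (linearize(raw) - linearize(darkRaw)) /
                 (linearize(sunRaw) - linearize(darkRaw));
}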
Using the sun as our high scale point requires that the readings are done under a clear blue sky. Clouds passing overhead between readings (or other haze/humidity variations) could change the ratio of IR to visible light more than the plants you are trying to monitor:
Rain is another potential complication, as water strongly absorbs IR, so the readings have to be taken long enough after a rainfall that no water droplets are present on the surfaces. With the basic 2x LED configuration I’m using here, you have to wait for good weather and take the readings under a clear blue sky, between mid morning & mid afternoon.
Cloud induced variations could be compensated by putting two sets of sensors on the logger (one pointing up & one pointing down) for simultaneous correction of direct insolation vs surface reflectance, but for now this is just a prototype trial to see if a decent ‘lab exercise’ can be developed for the Cave Pearl Loggers with minimal additions to the basic build.
Does it work?
Backyard trial testing the reflectivity of my lawn in an area deliberately chosen as “unhealthy” grass.
With all the rough assumptions up to this point I was surprised to find the reflectance readings falling within the broad range of ‘textbook’ values. A reading 1m above a relatively healthy section of my lawn produced an index of 0.39, while a reading above a mangy half-dead section (photo: right) produced a much lower index value around 0.215. A patch of bare dirt read at 0.044, and my gravel driveway produced an index of -0.13. The front flower garden produced a reading of 0.292. It’s hard to know how representative this is given the wide range of values listed for different plant species in the various spectral libraries.
With the challenges of species variation & water condition, the use of verified bright & dark targets is pretty common in biophysical sensing. White panels coated with barium sulfate paint, Komatex, or Teflon are sometimes used because they have reflectance near 100% with very few specular artifacts. NASA’s Aster Spectral Library suggests that most flat black paints have a similar response through visible and NIR, so some Rustoleum on a big sheet of cardboard might work as a 4% calibration point.
A better idea for the classroom:
Covering large areas with a grid of these single-shot readings would take a substantial amount of time, and even relatively short trials run into issues with trees and other large shadows creeping into the test patches throughout the day. So logging reflectance is more suited to long-term measurements of vegetation cover at a single location (or forest canopy transmittance).
After a bit more reading, I began to notice a pattern:
Leaves cover the input window, secured to prevent wind shifting.
So index calculations ought to work with light that is transmitted through the leaves because the two curves contain very similar information. With the standard reflectance-based NDVI, ratings between 0 and 0.33 indicate unhealthy or stressed plants, 0.33 to 0.66 is moderately healthy, and 0.66 to 1 is very healthy. Flipping the calculation to a transmission-based version will shift those values significantly, but we can still use that trend as a rough guide to whether the method is working.
As luck would have it, this insight occurred just before another fieldwork trip, so it was more than a week before I could test the idea. The first trials used leaves from tropical almond trees, which are dry-season deciduous. Re-capturing some of their ‘energetically expensive’ chlorophyll turns the leaves pinkish-red or yellow-brown due to leftover pigments such as violaxanthin, lutein, and zeaxanthin. These xanthophylls are yellow pigments from one of the two major divisions of the carotenoid group (the other division is formed by the carotenes). They act to modulate light energy and may also serve as non-photochemical quenching agents to deal with excited states of chlorophyll that are overproduced during photosynthesis under intense light conditions.
Leaf Number | Red RGB channel: scaled % of full-sun reading | IR LED reading: scaled % of full-sun reading | (gYr) “Transmission based” NDVI
#1 (green)  | 37.24 | 82.81 | 0.380
#2 (orange) | 59.47 | 83.13 | 0.166
#3 (red)    | 42.22 | 80.95 | 0.314
#4 (yellow) | 67.62 | 82.91 | 0.102
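As a sanity check, the index values in these tables are consistent with the standard normalized-difference ratio applied to the scaled transmission percentages – for leaf #1: (82.81 − 37.24) / (82.81 + 37.24) ≈ 0.380. A minimal sketch of that calculation:

float transmissionNDVI(float irScaled, float redScaled) {  // both inputs: scaled % of full-sun reading
  return (irScaled - redScaled) / (irScaled + redScaled);
}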
The overlap of the sub-LEDs at 550nm is leveraged for transmission (green) and photodetection (red) in short range visible light communications.
All leaves were ‘fresh-picked’ from the tree, and the percent transmission numbers were averaged from five readings taken one minute apart. Natural light is notoriously variable, so most index sensors use considerably more sample averaging than that.
It’s not surprising that the yellow leaf was well discriminated, but the fact that the green & red leaves produced similar values highlights an issue with our rough prototype: the widened spectral spread of the red LED channel makes it difficult to distinguish between light frequencies on either side of the response curve.
Leaf | Red RGB channel: scaled % of full-sun reading | IR LED reading: scaled % of full-sun reading | (gYr) T-NDVI
#10 (dead, on ground) | 26.96 | 65.38 | 0.416
#11 (yellow patch)    | 44.90 | 79.45 | 0.278
#12 (green & healthy) | 34.27 | 76.17 | 0.379
#13 (>50% yellow)     | 55.56 | 77.79 | 0.167
Despite the limited red/green selectivity, this set shows that, at least for ‘fresh’ leaves, our custom index can still discriminate the loss of chlorophyll. The dead leaf suggests that this will only work up to a point. Dead vegetation reflects a greater amount of energy than healthy vegetation throughout the visible spectrum but reflects less IR, so any actual ‘dead spots’ will raise a transmission-based index value. This is a limitation of our method – it does not produce a natural separation between healthy and dead vegetation the way a reflectance-based index would. That would be a serious problem for remote sensing applications, but when a leaf that you can hold in your hand is that badly off, you probably don’t need a sensor to tell you the plant is not thriving.
Does this transfer to other plant species?
Leaf | Red RGB channel: scaled % of full-sun reading | IR LED reading: scaled % of full-sun reading | (gYr) T-NDVI
#14 (green leaf – from stressed plant 1)   | 39.29 | 77.51 | 0.327
#15 (brown spots – from stressed plant 1)  | 37.71 | 76.23 | 0.338
#16 (green leaf from healthy plant 2)      | 21.83 | 68.05 | 0.514
#18 (>50% yellow – other stressed plant 3) | 32.65 | 65.47 | 0.334
As expected, the overall light transmission numbers were different for palm than they were for the tropical almond leaves, so a calibration set would need to be created for each plant species to put these numbers into context. I’m assuming the green/yellow discrimination is due to chlorophyll levels in the almond leaves, but there could be other confounding factors like anthocyanin in the hardier palm leaves. (Anthocyanin also absorbs in our sensor band, and is abundant in senescing leaves)
While #14 and #15 look different, they were taken from the same plant, and the yellow-brown spots on #15 make it clear the plant was under some kind of stress. Visually, I would not have been able to distinguish leaf #14 from #16, but the index identified #14 as being from an “unhealthy” plant. Given the relatively wide spectrum we are working with here, this is a remarkable result – suggesting that with a bit of homework to find LEDs with tighter detection bands, we could produce an inexpensive chlorophyll meter (like the SPAD?) or we could tune the idea for other pigments/applications. It should be possible to at least match the performance of the leaf color charts currently being used to assess when fertilizer is needed. Using a white LED emitter above the leaf could enable a variant of our simple approach that was not dependent on the sun, and adding a couple of reads using only green & red emitters above the leaf would let us distinguish which side of that wide detection curve we were on. Commercial chlorophyll meters sometimes use emitters at 660 nm & 940 nm, measuring both reflectance AND transmission so an accurate absorbance value can be calculated. Even then, plants cease to create chlorophyll once a certain threshold has been reached, so these meters are often used to detect deficiencies (by comparison to a well-fertilized control group) rather than concentrations.
And finally, a classroom lab could combine this kind of index-based characterization with paper chromatography to verify the pigments in the leaves via their Rf factors. A more advanced approach would do this quantitatively, turning this index into a true diagnostic tool. This could also be a good accompaniment to the many other Arduino-based plant monitoring projects, as a growth or health verification stage. In northern climates this could even be done with house plants by taking readings ‘before’ and ‘after’ some experimental intervention. With those large temporal separations, you would want to take new max/min readings each time for the scaling & avoid using artificial light sources, as these introduce frequency artifacts that could interfere with the index.
“The same procedure is used for oxygen saturation measurement. Here the principle is to measure the absorption of the hemoglobin in the blood. Oxygenated hemoglobin (HbO2) has a significantly different absorption of light than non-oxygenated hemoglobin (Hb). To detect this difference, the skin is illuminated with one red and one IR LED light and a photodetector measures the absorption.”
AN685 figure 8: The RC rise-time response of the circuit allows microcontroller timers to be used to determine the relative resistance of the NTC element.
For more than a year we’ve been oversampling the Arduino’s humble ADC to get >16-bit ambient temperature readings from a 20¢ thermistor. That pin-toggling method is simple and delivers solid results, but it requires the main CPU to stay awake long enough to capture multiple readings for the decimation step (~200 milliseconds @ 250 kHz ADC clock). While that only burns 100 mAs per day with a typical 15 minute sample interval, a read through Thermistors in Single Supply Temperature Sensing Circuits hinted that I could ditch the ADC and read those sensors with a pin-interrupt method that would let me sleep the CPU during the process. An additional benefit is that the sensors would draw no power unless they were actively being read.
The resolution of time-based methods depends on the speed of the clock, and Timer1 can be set with a prescaler = 1, ticking in step with the 8 MHz oscillator on our Pro Mini based data loggers. The input capture unit can save Timer1’s counter value as soon as the voltage on pin D8 passes a high/low threshold. This underappreciated feature of the 328p is more precise than typical interrupt handling, and people often use it to measure the time between rising edges of two inputs to determine the pulse width/frequency of incoming signals.
You are not limited to one sensor here – you can line them up like ducks on as many driver pins as you have available. As long as they share that common connection you just read each one sequentially with the other pins in input mode. Since they are all being compared to the same reference resistor, you’ll see better cohort consistency than you would by using multiple series resistors.
Using 328p timers is described in detail at Nick Gammon’s Timers & Counters page, and I realized that I could tweak the method from ‘Timing an interval using the input capture unit’ (Reply #12) so that it recorded only the initial rise of an RC circuit. This is essentially the same idea as his capacitance measuring method, except that I’m using D8’s external pin change threshold at 0.66*Vcc rather than a level set through the comparator. That’s almost the same as one RC time constant (63%), but the actual level doesn’t matter so long as it’s consistent between the two consecutive reference & sensor readings. It’s also worth noting that the method doesn’t require an ICU peripheral – any pin that supports a rising/falling interrupt can be used (see addendum for details). It’s just that the ICU makes the method more precise, which is important with small capacitor values. (Note that AVRs have Schmitt triggers on the digital GPIO pins. This is not necessarily true for other digital chips. For pins without a Schmitt trigger, this method may not give consistent results)
Using Nick’s code as my guide, here is how I set up Timer1 with the ICU:
#include <avr/sleep.h>   // to sleep the processor
#include <avr/power.h>   // for peripheral shutdown
#include <LowPower.h>    // https://github.com/rocketscream/Low-Power

float referencePullupResistance = 10351.6;  // a 1% metfilm measured with a DVM
volatile boolean triggered;
volatile unsigned long overflowCount;
volatile unsigned long finishTime;

ISR (TIMER1_OVF_vect) {   // triggers when Timer1 overflows: every 65536 system clock ticks
  overflowCount++;
}

ISR (TIMER1_CAPT_vect) {  // transfers the Timer1 count when D8 reaches the threshold
  sleep_disable();
  unsigned int timer1CounterValue = ICR1;  // Input Capture Register (datasheet p117)
  unsigned long overflowCopy = overflowCount;

  if ((TIFR1 & bit(TOV1)) && timer1CounterValue < 256) {  // 256 is an arbitrary low value
    overflowCopy++;  // in case we “just missed” an overflow
  }

  if (triggered) { return; }  // multiple-trigger error catch

  finishTime = (overflowCopy << 16) + timer1CounterValue;
  triggered = true;
  TIMSK1 = 0;  // writing zero disables all 4 interrupts controlled by this register
}

void prepareForInterrupts() {
  noInterrupts();
  triggered = false;       // reset for the do{ … }while(!triggered); loop
  overflowCount = 0;       // reset the overflow counter
  TCCR1A = 0; TCCR1B = 0;  // reset the two (16-bit) Timer1 control registers
  TCNT1 = 0;               // we are not preloading the timer for match/compare
  bitSet(TCCR1B, CS10);    // set prescaler to 1x system clock (F_CPU)
  bitSet(TCCR1B, ICES1);   // Input Capture Edge Select: =1 for rising edge
  // or use bitClear(TCCR1B,ICES1); to record the falling edge

  // clear the Timer/Counter Interrupt Flag Register bits by writing a 1
  bitSet(TIFR1, ICF1);     // Input Capture Flag 1
  bitSet(TIFR1, TOV1);     // Timer/Counter Overflow Flag

  bitSet(TIMSK1, TOIE1);   // interrupt on Timer1 overflow
  bitSet(TIMSK1, ICIE1);   // enable the input capture unit
  interrupts();
}
With the interrupt vectors ready, take the first reading with pins D8 & D9 in INPUT mode and D7 HIGH. This charges the capacitor through the reference resistor:
//========== read 10k reference resistor on D7 ===========
power_timer0_disable();  // otherwise Timer0's frequent millis()/micros() interrupts disturb the count

pinMode(7, INPUT); digitalWrite(7, LOW);   // our reference resistor
pinMode(9, INPUT); digitalWrite(9, LOW);   // the thermistor
pinMode(8, OUTPUT); digitalWrite(8, LOW);  // ground & drain the cap through 300Ω
LowPower.powerDown(SLEEP_30MS, ADC_OFF, BOD_ON);  // overkill: 5T is only 0.15ms

pinMode(8, INPUT); digitalWrite(8, LOW);   // now pin D8 is listening

set_sleep_mode(SLEEP_MODE_IDLE);  // leaves Timer1 running
prepareForInterrupts();
noInterrupts();
sleep_enable();
DDRD |= (1 << DDD7);     // pin D7 to OUTPUT
PORTD |= (1 << PORTD7);  // pin D7 HIGH -> charging the cap through the 10k reference

do {
  interrupts();
  sleep_cpu();           // sleep until D8 reaches the threshold voltage
  noInterrupts();
} while (!triggered);    // trapped here till TIMER1_CAPT_vect changes triggered

sleep_disable();         // redundant here, but belt & suspenders, right?
interrupts();
unsigned long elapsedTimeReff = finishTime;  // this is the reference reading
Now discharge and then repeat the process a second time with D7 & D8 in INPUT mode, and D9 HIGH to charge the capacitor through the thermistor:
//========== read the NTC thermistor on D9 ===========
pinMode(7, INPUT); digitalWrite(7, LOW);   // our reference
pinMode(9, INPUT); digitalWrite(9, LOW);   // the thermistor
pinMode(8, OUTPUT); digitalWrite(8, LOW);  // ground & drain the cap through 300Ω
LowPower.powerDown(SLEEP_30MS, ADC_OFF, BOD_ON);

pinMode(8, INPUT); digitalWrite(8, LOW);   // now pin D8 is listening

set_sleep_mode(SLEEP_MODE_IDLE);
prepareForInterrupts();
noInterrupts();
sleep_enable();
DDRB |= (1 << DDB1);     // pin D9 to OUTPUT
PORTB |= (1 << PORTB1);  // set D9 HIGH -> charging through the 10k NTC thermistor

do {                     // same wait loop as the reference pass
  interrupts();
  sleep_cpu();
  noInterrupts();
} while (!triggered);

sleep_disable();
interrupts();
unsigned long elapsedTimeSensor = finishTime;  // this is your sensor reading
Now you can determine the resistance of the NTC thermistor via the ratio:
unsigned long resistanceof10kNTC =
  (elapsedTimeSensor * (unsigned long)referencePullupResistance) / elapsedTimeReff;

pinMode(9, INPUT); digitalWrite(9, LOW);
pinMode(7, INPUT); digitalWrite(7, LOW);
pinMode(8, OUTPUT); digitalWrite(8, LOW);  // discharge the capacitor when you are done
The integrating capacitor does a fantastic job of smoothing the readings and getting the resistance directly eliminates 1/2 of the calculations you’d normally do with a thermistor. To figure out your constants, you need to know the resistance at three different temperatures. These should be evenly spaced and at least 10 degrees apart with the idea that your calibration covers the range you expect to use the sensor for. I usually put loggers in the refrigerator & freezer to get points with enough separation from normal room temp with the thermistor taped to the surface of a si7051. Then plug those values into the thermistor calculator provided by Stanford Research Systems.
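For reference, the final conversion is just the Steinhart-Hart equation. A minimal sketch, with placeholder coefficients that must be replaced by the A/B/C values from your own three-point calibration:

#include <math.h>
// Steinhart-Hart: 1/T = A + B*ln(R) + C*(ln(R))^3, with T in Kelvin
const float A = 1.009e-3, B = 2.378e-4, C = 2.019e-7;  // placeholders only!
float ntcTempC(float resistance) {
  float lnR = log(resistance);                          // natural log
  return 1.0 / (A + B*lnR + C*lnR*lnR*lnR) - 273.15;    // convert Kelvin to Celsius
}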
Just for comparison I ran a few head-to-head trials against my older dithering/oversampling method:
°Celsius vs time [1 minute interval]: si7051 reference [0.01°C 14-bit resolution, 10.8 msec/read] vs. pin-toggle oversampled 5k NTC vs. ICU-timing 10k NTC. I’ve artificially separated these curves for visual comparison, and the 5k was not in direct contact with the si7051 ±0.1°C accuracy reference, while the 10k NTC was taped to the surface of the chip – so some of the 5k’s offset is an artifact. The Timer1 ratios deliver better resolution than 16-bit (equivalent) oversampling in 1/10 the time.
There are a couple of things to keep in mind with this method:
si7051 reference temp (°C) vs 10k NTC temp with a ceramic 106 capacitor. If your ulong calculations overflow at low temperatures like the graph above, switch to doing the division before the multiplication or use larger ‘long long’ variables. Also keep in mind that the Arduino will default to 16-bit calculations unless you set/cast everything to longer ints. Or you could make your life easy and save the raw elapsedTimeReff & elapsedTimeSensor values and do the calculations later in Excel. Whenever you see a sudden discontinuity, where the result of a calculation suddenly takes a big jump to larger or smaller values, you should suspect a variable type/cast error.
1) Because my Timer1 numbers were small with a 104 cap, I did the multiplication before the division. But keep in mind that this method can easily generate values that over-run your variables during calculation. Ulong MAX is 4,294,967,295, so the elapsedTimeSensor reading must be below 429,496 or the multiplication overflows with a 10,000 ohm reference. Dividing that by our 8 MHz clock gives about 50 milliseconds. The pin interrupt threshold is reached after about one rise-time constant, so you can use an RC rise time calculator to figure out your capacitor’s upper size limit. But keep in mind that’s one RC at the maximum resistance you expect from your NTC – the resistance at the coldest temperature you expect to measure, as opposed to its nominal rating. (But it’s kind of a chicken & egg thing with an unknown thermistor, right? See if you can find a manufacturer’s table of values on the web, or simply try a 0.1uF and see if it works.) Once you have some constants in hand, Ametherm’s Steinhart & Hart page lets you check the actual temperature at which your particular therm will reach a given resistance. Variable over-runs are easy to spot because the problems appear & then disappear whenever some temperature threshold is crossed. I tried to get around this on a few large-capacitor test runs by casting everything to float variables, but that led to other calculation errors.
(Note: Integer arithmetic on the Arduino defaults to 16 bit & never promotes to higher-bit calculations unless you cast one of the numbers to a high-bit integer first. After casting, the Arduino supports 64-bit “long long” int64_t & uint64_t integers for large number calculations, but they do gobble up lots of program memory space – typically adding 1 to 3k to the compiled size. Also, Arduino’s printing function can not handle 64-bit numbers, so you have to slice them into smaller pieces before using any .print functions)
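A minimal sketch of that casting fix, assuming the same elapsed-time variables as above:

// force the multiplication up to 64 bits so the intermediate value can't overflow
uint64_t wide = (uint64_t)elapsedTimeSensor * 10000ULL;  // 10k reference, in ohms
unsigned long resistanceNTC = (unsigned long)(wide / elapsedTimeReff);
// cast back down (or slice) before Serial.print, which can't handle 64-bit ints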
2) This method works with any kind of resistive sensor, but if you have one that goes below ~200 ohms (like a photoresistor in full sunlight) then the capacitor charging could draw more power than you can safely supply from the D9 pin. In those cases add a ~300Ω resistor in series with your sensor to limit the current, and subtract that value from the output of the final calculation. At higher currents you’ll also have voltage drop across the mosfets controlling the I/O pins (~40Ω on a 3.3v Pro Mini), so make sure the calibration includes the ends of your range.
There are a host of things that might affect the readings, because every component has temperature, aging, and other coefficients, but for the accuracy level I’m after many of those factors are absorbed into the S&H coefficients. Even if you pay top dollar for reference resistors it doesn’t necessarily mean they are “low TC” – that’s why expensive resistors have a temperature compensation curve in the datasheet. What you’re actually buying in quality references is usually low long-term drift @ a certain fixed temperature (normally around 20~25°C), so ‘real world’ temps up at 40°C are going to cause accelerated drift.
The ratio-metric nature of the method means it’s almost independent of the value of the capacitor, so you can get away with a cheap ceramic cap even though its value changes dramatically with temperature (& also with DC bias). In my tests the thermal variation of a Y5V causes a delta in the reference resistor count that’s about 1/3 the size of the delta in the NTC thermistor. Last year I also found that the main system clock was variable to the tune of about 0.05% over a 40°C range, but that shouldn’t be a problem if the reference and the sensor readings are taken immediately after one another. Ditto for variations in your supply. None of it matters unless it affects the system ‘between’ the two counts you are using to make the ratio.
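To see why, note that the charge time to a fixed threshold is t = R·C·ln(1/(1−Vth/Vcc)), so the ratio of the two back-to-back readings is t_sensor/t_ref = (R_ntc·C)/(R_ref·C) = R_ntc/R_ref: the capacitance, the threshold level, and the supply voltage all cancel, provided none of them change in the moment between the two counts.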
The significant digits of your final calculation depend on the RC rise time, so switching to 100k thermistors increases the resolution, as would processors with higher clock speeds. You can shut down more peripherals with the PRR to use less power during the readings, as long as you leave Timer1 running with SLEEP_MODE_IDLE. I’ve also found that cycling the capacitor through an initial charge/discharge cycle (through the 300Ω on D8) improved the consistency of the readings. That capacitor shakedown might be an artifact of the ceramics I was using, but you should take every step you can to eliminate errors that might arise from pre-existing conditions. I’ve also noticed that the read order matters, though the effect is small.
Code consistency is always important with time-based calibrations, no matter how stable your reference is. Using a smaller integrating capacitor makes it more likely that your calibration constants will be affected by variations in code execution time. Any software scheme is going to show some timing jitter because both interrupts and the loop are subject to being delayed by other interrupts, and noise on the rail from a bad regulator will directly affect your readings, so using a larger 1uF (105) capacitor is a safer option than a 104. This method bakes a heap of small systemic errors into those NTC calibration constants, and the approach works because most of those errors are thermal variations too. However, code-dependent variations mess with the fit of the thermistor equation because they tend to be independent of temperature, so make sure the deployment uses EXACTLY the same code that you calibrated with. We are passing current through the thermistor to charge the capacitor, so there will inevitably be some self-heating – if your calibration constants were derived with one pass, then your deployment must also read only one pass. If you calibrate with a 104 cap & then switch to a 105, the temps recorded with the larger 105 will be offset higher than actual. Oversampling works fine to boost resolution, but since it leverages multiple passes it also causes much more self-heating.
Will this method replace our pin-toggled oversampling? Perhaps not for something as simple as a thermistor since that method has already proven itself in the real world, and I don’t really have anything better to do with A6 & A7. And oversampling still has the advantage of being simultaneously available on all the analog inputs, while the ICU is a limited resource. Given the high resolution that’s potentially available with the Timer1/ICU combination, I might save this method for sensors with less dynamic range. I already have some ideas there and, of course, lots more testing to do before I figure out if there are other problems related to this new method. I still haven’t determined what the long-term drift is for the Pro Mini’s oscillator, and the jitter seen in the WDT experiment taught me to be cautious about counting those chickens.
Addendum: Using the processors built in pull-up resistors
After a few successful runs, I realized that I could use the internal pull-up resistor on D8 as my reference, bringing it down to only three components. Measuring the resistance of the internal pull-up is simply a matter of enabling it and then measuring the current that flows between that pin and ground. I ran several tests, and the Pro Mini’s internal pullups were all close to 36,000 ohms, so my reference would become 36k + the 300Ω resistor needed on D8 to safely discharge the capacitor between cycles. I just have to set the port manipulation differently before the central do-while loop:
PORTB |= (1 << PORTB0); // enable pullup on D8 to start charging the capacitor
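The rest of the read sequence is unchanged; the only other difference is the constant used in the final ratio, since the pull-up plus the 300Ω discharge resistor now plays the role of the 10k metfilm. A sketch, assuming a measured pull-up value like mine:

float referencePullupResistance = 36000.0 + 300.0;  // measured internal pull-up + the 300Ω on D8
// measure the pull-up on your own board - they vary from chip to chip & with temperature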
A comparison of the two approaches:
°Celsius vs time (post-calibration, 0.1uF X7R ceramic cap): si7051 reference [±0.01°C] vs. thermistor ICU ratio with a 10k reference resistor charging the capacitor vs. the same thermistor reading calibrated using the rise time through pin D8’s internal pull-up resistor. The reported ‘resistance’ value of the NTC thermistor was more than 1k different between the two methods, with the 10k met-film reference providing values closer to the rated spec. However, the Steinhart-Hart equation constants from the calibration were also quite different, so the net result was indistinguishable between the two references in the room-temperature range.
In reality, the pull-up “resistor” likely isn’t a real resistor at all, but an active device made out of transistor(s) that looks like a resistor when operated in its active region. I found the base-line temperature variance to be about 200 ohms over my 40°C calibration range. And because you are charging the capacitor through the reference and through the thermistor, the heat that generates necessarily changes those values during the process. However, when you run a calibration those factors get absorbed into the S&H coefficients, provided you let the system equilibrate during the run.
As might be expected, all chip operation time affects the resistance of the internal pull-up, so the execution pattern of the code used for your calibration must exactly match your deployment code, or the calibration constants will give you an offset error proportional to the variance of the internal pull-up caused by the processor’s run-time. Discharging the capacitor through D8 also generates some internal heating, so those (~30-60ms) sleep intervals also have to be consistent. In data logging applications you can read that reference immediately after a long cool-down period of processor sleep, and use the PRR to reduce self-heating while the sample is taken.
Another issue was lag, because that pull-up is embedded with the rest of the chip in a lump of epoxy. This was pretty small, with a maximum effect less than ±0.02°C/minute, and I didn’t see it until temperatures fell below -5 Celsius. Still, for situations where temperature is changing quickly I’d stick with the external reference, and physically attach it to the thermistor so they stay in sync.
Addendum: What if your processor doesn’t have an Input Capture Unit?
With a 10k / 0.1uF combination, I was seeing Timer1 counts of about 5600, which is pretty close to one 63.2% R*C time constant for the pair. That combination limits you to 4 significant figures and takes about 2x 0.7msec per reading on average. Bumping the integrating capacitor up to 1uF (ceramic 105) multiplies your time by a factor of 10 – for another significant figure and an average of ~15msec per paired set of readings. Alternatively, a 1uF or greater capacitor allows you to record the rise times with micros() (which counts 8x slower than Timer1) and still get acceptable results (even with the extra interrupts that leaving Timer0 running causes…). So the rise-time method can be implemented on processors that lack an input capture unit – provided that they have Schmitt triggers on the digital inputs, like the AVR, which registers a CMOS high transition at ~0.6 * Vcc.
void d3isr() {
  triggered = true;
}

pinMode(7, INPUT); digitalWrite(7, LOW);   // reference resistor
pinMode(9, INPUT); digitalWrite(9, LOW);   // the thermistor
pinMode(3, OUTPUT); digitalWrite(3, LOW);  // ground & drain the cap through 300Ω
LowPower.powerDown(SLEEP_30MS, ADC_OFF, BOD_ON);  // 5T is only 1.5ms w 10k

pinMode(3, INPUT); digitalWrite(3, LOW);   // now pin D3 is listening

triggered = false;
set_sleep_mode(SLEEP_MODE_IDLE);  // leave Timer0 running for micros()
unsigned long startTime = micros();
noInterrupts();
attachInterrupt(1, d3isr, RISING);  // using pin D3 (INT1) here instead of D8
DDRD |= (1 << DDD7);     // pin D7 to OUTPUT
PORTD |= (1 << PORTD7);  // pin D7 HIGH -> charging the cap through 10k ref
sleep_enable();
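The block above ends just before the wait; to finish the reference read you spin in the same do-while pattern used with the ICU, then capture the elapsed time with micros(). A minimal completion sketch, assuming the same volatile ‘triggered’ flag as before:

do {
  interrupts();
  sleep_cpu();       // IDLE sleep until D3 rises past the Schmitt threshold
  noInterrupts();
} while (!triggered);
sleep_disable();
detachInterrupt(1);  // stop d3isr from re-firing on noise
interrupts();
unsigned long elapsedTimeReff = micros() - startTime;  // reference reading, in µs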
Then repeat the pattern shown earlier for the thermistor reading & calculation. I’d probably bump it up to a ceramic 106 for the micros method just for some extra wiggle room. The method doesn’t really care what value of capacitor you use, but you have to leave more time for the discharge step as the size of your capacitor increases. Note that I’m switching between relatively slow digital writes (~5 µs each) outside the timing loop, and direct port manipulation (~200 ns each) inside the timed sequences to reduce that source of error.
Addendum 20191020:
After running more tests of this technique, I’m starting to appreciate that even on regulated systems you always have about 10-30 counts of jitter in the Timer1 counts, even with the 4x input capture filter enabled. I suspect this is because the Schmitt triggers on the digital pins are also subject to noise/temp/etc, and because other system interrupts find a way to sneak in. A smaller 104 integrating capacitor makes your readings 10x faster, but the fixed jitter error is then a correspondingly larger percentage of the total reading (100nF typically sees raw counts in the 3000 range for the 10k reference). By the time you’ve oversampled the 104 capacitor readings up to a bit-depth equivalent of the single-pass readings with the 105 ceramic capacitor (raw counts in the 60,000 range for the same 10k ref), you’ve spent about the same amount of run-time getting to that result. (Keep in mind that even with a pair of single-pass readings using the 10k/104 capacitor combination, raw counts of ~3500 yield a jitter-limited thermistor resolution of about 0.01°C)
So, as a general rule of thumb, if your raw Timer1 counts are in the 20,000-60,000 range, you get beautiful smooth results no matter what you did to get there. This translates into about 2.5 – 7.5 milliseconds per read, and this seems to be a kind of ‘sweet-spot’ for ICU based timing methods because the system’s timing jitter error is insignificant at that point. With 5 significant figures in the raw data, the graphs are so smooth they make data from the si7051 reference I’m using look like a scratchy mess.
Another thing to watch out for is boards using temperature compensated oscillators for their system clock. ICU methods work better with the crappy ceramic oscillators on clone boards, because their poor thermal behavior just gets rolled into the thermistor’s S&H constants during calibration. However, better quality boards like the 8 MHz Ultra from Rocket Scream have compensation circuits that kick in around 20°C, which puts a weird discontinuity into the system behavior that cannot be gracefully absorbed by the thermistor constants. So, counterintuitively, you get worse calibration results from boards using temperature compensation on their oscillators.
The thermistor constants also neatly absorb the tempco and even offset errors in the reference resistor. So if you are calibrating a thermistor for a given logger, and it will never be removed from that machine, you can set your reference resistor in the code to some arbitrary perfect value like 10,000 ohms, and just let the calibration push any offset between the real reference and your arbitrary value into the S&H constants. This lets you standardize the code across multiple builds if you are not worried about ‘interchangeability’.
And finally, this method is working well on unregulated systems with significant battery supply variations as I test the loggers down to -15C in my freezer. In addition to battery droop, those cheap ceramic caps have wicked tempcos, so the raw readings from the reference resistor are varying dramatically during these tests, but the ‘Ratio/Relationship’ of the NTC to this reference is remaining stable over a 30-40C range, with errors in the ±0.1°C range, relative to the reference. (Note: si7051 itself has ±0.13°C, so the net is probably around ±0.25°C)
“Using a thin film resistor at ±10 ppm/°C would result in a 100 ppm (0.01%) error if the ambient changes by only 10°C. If the temperature of operation is not close to the midpoint of the temperature range used to quantify the TCR at ±10 ppm/°C, it would result in a much larger error over higher temperature ranges. A foil resistor would only change 0.0002% to 0.002% over that same 10°C span, depending upon which model is used (0.2 ppm/°C to 2 ppm/°C.) And for larger temperature spans, it would be even more important to use a resistor with an inherently low TCR.”
During calibration, I usually bake the reference resistor’s tempco into the thermistor constants by simply assuming the resistor is a ‘perfect 10k ohm’ during all calculations (this also makes the code more portable between units). However, this does nothing to correct long-term drift in your reference. If you want to tackle that problem with something like a $10 Vishay Z-foil resistor (with life stability of ±0.005%), then it’s probably also worth adding plastic film capacitors, which have much better thermal coefficients: polyphenylene sulfide (PPS ±1.5%) or polypropylene (CBB or PP ±2.5%). A quick browse around the Bay shows those are often available for less than $1 each, and the aging rate (% change/decade hour) for both of those dielectrics is listed as negligible. The trade-off is that they are huge in comparison to ceramics, so you are not going to just sneak one in between the pins on your pro-mini. Be sure to check the rated voltage – and don’t order them if they are rated >100v, as the common 650v film caps are too big to be practical on small logger builds. For the coup de grâce, you could correct away the system clock variation by comparing it to the RTC.
Addendum 2020-03-31: Small capacitors make this method sensitive to a noisy rail
After a reasonable number of builds I have finally identified one primary cause of timing jitter with this technique: a noisy regulator. To get sleep current down I replace the stock MIC5205’s on clone ProMini boards with MCP1700’s and I noticed a few from a recent batch of loggers were producing noisy curves on my calibration runs. One of them was extreme, creating >0.5°C of variation in the record:
ICU based Thermistor readings VS si7051 reference. Sensors in physical contact with each other. Y axis = Celsius
But in the same batch I had other thermistors with less noise than the si7051s I was using as a reference. All were using small 104 ceramic capacitors for the integration, producing relatively low counts (~3500 clock ticks) on the 10k reference resistor.
For the unit shown above I removed the regulator, and re-ran the calibration using 2x Lithium AA batteries in series to supply the rail voltage. No other hardware was changed:
Same unit, after regulator was removed. Samples taken at 1 minute interval on both runs.
In hindsight, I should have guessed a bad regulator was going to be a problem, as few other issues can cause that much variation in the brief interval between the reference resistor & thermistor readings. Regulator noise/ripple translates instantly into a variation in the Schmitt trigger point on the input pin – which affects the ICU’s timer count. It’s possible that this issue could be eliminated with more smoothing, so I will try a few caps across the rails on the less problematic units. (~1000µF/108 tantalums can be found for 50¢ on eBay, but I will start with a 10µF/106 & work up from there)
Addendum 2020-04-05: 106 ceramic across rails increased ICU reading noise (w bad reg)
Adding a cheap 106 (Y5V ceramic) across the rails more than doubled the noise in the readings of the NTC with this ICU technique. This is interesting, as it goes completely against what I was expecting to happen. Possibly that 10µF cap was just badly sized for this job or had some other interaction via inductance effects that actually accentuated the noise? I probably need a smaller, faster cap for the job.
Changing the sampling capacitor from a 104 (0.1µF) , to a 105 (1µF) dramatically reduced the problem. Not surprising as the rail noise from the regulator is relatively consistent, while the reference timer counts change from ~3500 with a 104 capacitor to ~60,000 with the larger 105. So the jitter is still there, but it is proportionally much smaller. I’m currently re-running the thermistor calibration with that larger capacitor. If the gods are kind, the S&H thermistor constants will be the same no matter what sampling capacitor is used.
It’s worth noting that this issue only appeared with the most recent crop of crappy eBay regulators. But if you are sourcing parts from dodgy vendors, you’d best go with a 105 sampling capacitor right from the start to smooth out that kind of noise.
Addendum 2020-04-06: Re-use old S&H constants after changing the sample capacitor?
After discovering the regulator issue, I re-ran a few thermistor calibrations once the sampling capacitor had been changed from a 104 to a 105. This revealed that the thermistor constants obtained with a 104 sampling capacitor still work, but it depends on the tolerance you are aiming for: the older 104-cap constants drifted by about ±0.3 Celsius over a 40°C range. The extra resolution provided by the larger 105 cap is only useful if you have the accuracy to utilize it (ie: it doesn’t matter if the third decimal place is distinguishable if the whole number is wrong). I generally aim for a maximum of ±0.1°C over that range, so for our research loggers that’s a complete do-over on the calibration. From now on I will only use 105 caps (or larger) with this ICU technique on regulated systems. The battery-only units were smooth as silk with smaller 104 caps because the rail had zero noise.
Addendum 2020-05-21: Using the ADS1115 in Continuous Mode for Burst Sampling
For single resistive sensors, it’s hard to beat this ICU method for elegance & efficiency. However there’s still one sensor situation that forces me to go to an external ADC module: Differential readings on bridge sensors. In those cases I use an ADS1115 module, which can also generate interrupt alerts for other tricks like ‘event’ triggered sampling.
Addendum 2021-01-24 Don’t sleep during the do-while loop with regular I/O pins.
I’ve been noodling around with other discharge timing methods and came across something that’s relevant to using these methods on other digital pins. Here’s the schematic from the actual 328 datasheet, with a bit of highlighting added. The green path is PINx: it’s always available to the databus through the synchronizer (except in SLEEP mode?). The purple path is PORTx; whether or not it is connected to PINx depends on the state of DDRx (the yellow path).
As shown in the figure of General Digital I/O, the digital input signal can be clamped to ground at the input of the Schmitt Trigger. The signal denoted SLEEP in the figure, is set by the MCU Sleep Controller in Power-down mode and Standby mode to avoid high power consumption if some input signals are left floating, or have an analog signal level close to VCC/2.
When sleeping, any GPIO that is not used as an interrupt input has its input buffer disconnected from the pin and clamped LOW by the MOSFET.
Clearly the ICU on D8 must make it one of those ‘interrupt exceptions’ or the cap charging method would have been grounded out when entering the sleep state. If you use a similar timing method on normal digital IO pins you can’t sleep the processor in the central do-while loop.
The datasheet also warns that on pins where this clamp is overridden, such as external interrupts, you must ensure these inputs don’t float (page 53). The implication is that putting anything near 1/2 of the rail voltage on the input’s Schmitt trigger will cause current to flow into or out of the pin. Yet another source of error that gets rolled into those catch-all S&H constants.
Addendum 2022-03-09: Adding a second resistive sensor
The LDR goes to very large resistance at night, so we cap the reading at one timer overflow.
We usually integrate this ICU method with our low-power 2-part loggers as they have only a single Cr2032 as their power supply. Once the pins are pulled low these resistive sensors add nothing to the sleep current. With the stability of battery power, a 104 is sufficient for stable readings of about 6000 counts for 10k ohms. For more details see: Powering a ProMini logger for one year on a coin cell.
328p processor System Clocks & their Distribution pg26
Most micro-controllers use a quartz crystal oscillator to drive the system clock, and their resonant frequency is reasonably stable with temperature variations. In high accuracy applications like real time clocks, even that temperature variation can be compensated, and last year I devised a way to measure temperature by comparing a 1-second pulse from a DS3231 to the uncompensated 8 MHz oscillator on a Pro Mini. This good clock / bad clock method worked to about 0.01°C, but the coding was complicated, and it relied on the ‘quality’ of the cheap RTC modules I was getting from fleaBay – which is never a good idea.
But what if you could read temperature better than 0.01°C using the Pro Mini by itself?
Figure 27-34: Watchdog Oscillator Frequency vs. Temperature. Pg 346 (Odd that the frequency is listed as 128kHz on pg 55?) Variation is ~100 Hz/°C
The 328P watchdog timer is driven by a separate internal oscillator circuit running at about 110 kHz. This RC oscillator is notoriously bad at keeping time because that on-chip circuit is affected by external factors like temperature, but in this particular case that’s exactly what I’m looking for. The temperature coefficient of crystal resonators is usually quoted at 10^-6/°C, while for RC oscillation circuits the coefficient is usually somewhere between 10^-3/°C and 10^-4/°C. Plenty of standard sensors don’t give you a delta that large to play with!
To compare the crystal-driven system clock to the Watchdogs unstable RC oscillator I needed a way to prevent the WDT from re-starting the system. Fortunately you can pat the dog and/or disable it completely inside its interrupt vector:
#include <avr/wdt.h>   // for wdt_disable() & wdt_reset()

volatile boolean WDTalarm = false;

ISR (WDT_vect) {
  wdt_disable();    // disable the watchdog so the system does not restart
  WDTalarm = true;  // flag the event
}
SLEEP_MODE_IDLE leaves the timers running, and they link back to the system clock, so you can use micros() to track how long the WDT actually takes for a given interval. Arduino’s micros() resolution cannot be better than 4 microseconds (not 1 µs as you’d expect) because of the way the timer is configured, but that boosts our detectable delta/° by a factor of four, and the crystal is far more thermally stable than the watchdog. It’s worth noting that timer0 (upon which micros() depends) generates interrupts all the time during the WDT interval – in fact, at the playground they suggest that you have to disable timer0 during IDLE mode sleeps. But for each time interval, the extra loops caused by those non-WDT interrupts create a consistent positive offset, and this does not affect the temperature-related delta.
WDTalarm = false;
// Set the Watchdog timer, from: https://www.gammon.com.au/power
byte interval = 0b000110;  // 1s=0b000110, 2s=0b000111, 4s=0b100000, 8s=0b100001
                           // 64ms=0b000010, 128ms=0b000011, 256ms=0b000100, 512ms=0b000101
noInterrupts();
MCUSR = 0;
WDTCSR |= 0b00011000;            // set WDCE, WDE
WDTCSR = 0b01000000 | interval;  // set WDIE & the delay interval
wdt_reset();                     // pat the dog
interrupts();

unsigned long startTime = micros();
while (!WDTalarm) {              // sleep while waiting for the WDT
  set_sleep_mode(SLEEP_MODE_IDLE);
  noInterrupts();
  sleep_enable();
  interrupts();
  sleep_cpu();
  sleep_disable();               // processor restarts here when any interrupt occurs
}
unsigned long WDTmicrosTime = micros() - startTime;  // this is your measurement!
The while-loop check is required to deal with the system interrupts that result from leaving the micros timer running; otherwise you never make it all the way through the WDT interval. I haven’t yet figured out how many interrupts you’d have to disable to get the method working without that loop.
To calibrate, I use my standard refrigerator->freezer->room sequence for a repeatable range >30°C. Since the mcu has some serious thermal lag, the key is doing everything VERY SLOWLY, with the logger inside a homemade “calibration box” made from two heavy ceramic pots with a bag of rice between them to add thermal mass:
1 sec WDT micros() (left axis) vs si7051 °C temp (right axis): calibration data selected from areas with the smallest change over time, so that the reference and the 328p have equilibrated.
If you use reference data from those quiescent periods, the fit is remarkably good:
si7051 reference temperature vs 1 sec WDT micros(): A fit this good makes me wonder if the capacitor on the xtal oscillator is affected the same way as the capacitor in the watchdog’s RC oscillator, with the net result being improved linearity. In this example, there was a constant over-count of 100,000 microseconds per 1-second WDT interval.
I’m still experimenting with this method, but my cheap clone boards are delivering a micros() delta > 400 counts/°C with a one second interval – for a nominal resolution of ~0.0025°C. Of course that’s just the raw delta. When you take that beautiful calibration equation and apply it to the raw readings, you discover an inter-reading jitter of about 0.1°C – and that lack of precision becomes the ‘effective’ limit of the resolution. It’s going to take some serious smoothing to get that under control, and I’ll be attacking the problem with my favorite median filters over the next few days. I will also see if I can reduce it at the source by shutting down more peripherals and keeping an eye on stray pin currents.
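Applying the calibration is then just evaluating the fit. A minimal sketch, where the slope & intercept are purely hypothetical placeholders – the sign and magnitude have to come from your own board’s calibration run:

const float degPerCount = 1.0 / 400.0;  // hypothetical: from the ~400 counts/°C delta above
const float interceptC  = 0.0;          // hypothetical offset from the linear fit
float tempC = degPerCount * (float)WDTmicrosTime + interceptC;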
Noise from si7051 reference (red) vs Cal. equation applied to raw WDT micros readings (blue).
Doubling the interval cuts both the noise and the apparent resolution in half, and if you are willing to wait around for the watchdog’s 8-second maximum you can add an order of magnitude. Of course you could also go in the other direction: a quarter-second WDT interval would deliver ~100 counts/°C, which still gets you a nominal 0.01°C, though the jitter gets worse. Note that you can’t re-use the ‘b’ coefficients from one interval to the next because of the overhead caused by the non-WDT interrupts. That “awake time” must also be contributing some internal chip heating.
The si7051 reference sensor needs to be held in direct physical contact with the surface of the mcu during the room->fridge->freezer calibration; which takes 24 hours. Since my ref is only ± 0.1ºC accuracy, calibrations based on it are probably only good to about ± 0.2ºC.
There are a few limitations to keep in mind, the biggest being that messing with WDT_vect means you can’t use the watchdog timer for its intended purpose any more (this interferes with RocketScream’s LowPower library). The other big limitation is that you can only do this trick on a voltage-regulated system, because RC oscillators are affected by the applied voltage – though in this case both oscillators are exposed to whatever is on the rail, so a bit more calibration effort might let you get away with a battery-driven system. (and now that I think about it… if you did the temp calibration while the system was regulated, you might then also be able to derive the system voltage from the two oscillators while running unregulated.)
Self-heating during normal operation means that this method will not be accurate unless you take your temperature readings after waking the processor from about 5-10 minutes of power-down sleep. The mass of the circuit board means that the mcu will always have significant thermal lag. So there is no way to make this method work quickly and any non-periodic system interrupts will throw off your micros() reading.
Every board has a different crystal/capacitor/oscillator combination, so you have to re-calibrate for each one. Although the slopes are similar, I’ve also found that the raw readings vary by more than ±10k between different Pro Minis for the same 1 sec WDT interval at the same temperature. The silver lining is that the boards I’m using probably have the cheapest parts available, so better quality components could boost the accuracy – though I should insert the usual blurb here that resolution and accuracy are not the same thing at all. I haven’t had enough time yet to assess things like drift, or hysteresis beyond the thermal lag issue, but those are usually less of a problem with quality kit. If your board is using Y5V caps it probably won’t go much below -15°C before capacitor failure disrupts the method.
It’s also worth noting that many sleep libraries, like RocketScream’s LowPower lib, do their own modifications to the watchdog timer, so this method won’t work with them unless you add the flag variable to their modified version of WDT_vect. To add this technique to the base code for our 1-hour classroom logger, I’ll have to get rid of that library dependency.
Where to go from here:
Turning off peripherals with PRR can save power and reduce heating during the interval.
Switching from micros() to timer-based overflows could increase the time resolution to less than 100 ns, raising the nominal thermal resolution.
Clocking the system from the DS3231’s temperature compensated 32khz output could give another 100 counts/°C and improve the thermal accuracy. My gut feeling is the noise would also be reduced, but that depends on where it’s originating.
Despite the limitations, this might be the best “no-extra-parts” method for measuring temperature that’s possible with a Pro Mini, and the method generalizes to every other micro-controller board on the market provided they have an independent internal oscillator for the watchdog timer.
Addendum:
As I run more units through the full calibration, I’m seeing about 1 in 3 where a polynomial fits the data better for the -15 to +25°C range:
si7051 reference temperature vs 1 sec WDT micros() : a different unit, but both clone boards from the same supplier
This is actually what I was expecting in the first place, and I suspect all the fits would be 2nd order with a wider range of calibration temperatures. Also, this is the raw micros output – so you could make those coefficients more manageable by subtracting the lowest temperature reading from all those above it. That would leave you with a numerical range of about 16,000 ticks over 40°C, which takes less memory and is easier for calculations.
And just for fun I ran a trial on an unregulated system powered by 2xAA lithium batteries. Two things happened: 1) the jitter/noise in the final temperature readings more than doubled – to about 0.25°C, and 2) calibration was lost whenever the thermal mass of the batteries meant that the supply was actively changing – regardless of whether the mcu & reference had settled or not:
Red is the Si reference [left axis], green is the calibration fit equation applied to the WDT micros() [left], and blue is the rail voltage supplied by 2xAA lithium batteries [right axis]. (Note: low voltage spikes are caused by internal housekeeping events in the Nokia 256MB SD cards)
Addendum 2019-02-26
This morning I did a trial run which switched from micros() to timer1 overflows, using code from Nick Gammon’s Improved sketch using Timer 1. This increased the raw delta to almost 5000 counts/°C, but unfortunately the width of the jitter also increased, to about 500 counts. So I’m seeing somewhere near ±0.05°C equivalent of precision error – although my impression is that it’s been reduced somewhat, because Timer1 only overflows 122 times per second, while the Timer0 behind micros() overflows nearly 500 times per second at 8 MHz. So changing timers means less variability from the while-loop code execution. Next step will be to try driving the timer with the 32kHz from the RTC…
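For anyone who wants to try this, the bones of the approach look something like the sketch below. To be clear, this is my own hedged reconstruction modeled on Nick’s counting examples, not his exact code; at 8 MHz with no prescaler each Timer1 tick is 125 ns:
volatile unsigned long timer1overflowCount = 0;  // extends the 16-bit timer in software
ISR (TIMER1_OVF_vect) {
  timer1overflowCount++;                // each overflow adds 65536 ticks
}
void startTimer1Count() {
  TCCR1A = 0;                           // normal counting mode
  TCCR1B = 0; TCNT1 = 0;                // stop & zero the counter
  timer1overflowCount = 0;
  TIFR1 = _BV(TOV1);                    // writing a 1 clears any pending overflow flag
  TIMSK1 = _BV(TOIE1);                  // enable the overflow interrupt
  TCCR1B = _BV(CS10);                   // start counting: system clock, no prescaler
}
unsigned long stopTimer1Count() {       // call at the end of the WDT interval
  TCCR1B = 0;                           // stop the counter before reading it
  return (timer1overflowCount << 16) | TCNT1;  // elapsed ticks since start
}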
Addendum 2019-02-27
So I re-jigged another one of Nick’s counting routines, which increments timer1 based on input from pin D5, using the WDT interrupt to set the interval. Then I enabled the 32.768 kHz output from a DS3231N and connected it to that pin. This pulse is dead-dog slow compared to the WDT oscillator, so I extended the interval out to 4 seconds. Even that long-ish sample time only produced a delta of about 40 counts/°C.
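The counting setup for that looks something like this (again a hedged sketch, reusing the overflow ISR above; it assumes the DS3231’s 32kHz pin has been enabled via the EN32kHz bit in its status register, has a pull-up on its open-drain output, and is wired to D5, which is the T1 external-clock input on a 328p):
void startCounting32kHzPulses() {
  TCCR1A = 0;                           // normal counting mode
  TCCR1B = 0; TCNT1 = 0;                // stop & zero the counter
  timer1overflowCount = 0;
  TIFR1 = _BV(TOV1);                    // clear any pending overflow flag
  TIMSK1 = _BV(TOIE1);                  // same overflow ISR as before
  TCCR1B = _BV(CS12) | _BV(CS11) | _BV(CS10);  // clock Timer1 from T1 (D5), rising edge
}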
Si7051 reference temp vs Timer1 counts of 32kHz output from DS3231N (based on data selected from quiescent periods)
There wasn’t enough data to produce high resolution, but my thought was that since the DS3231N has a temperature-compensated frequency output, it eliminates the xtal as a variable in the question of where the jitter was coming from. This approach also causes only 2-3 overflows on timer1, so the impact of code execution is further reduced. Unfortunately, this experiment did not improve the noise situation:
DS3231 32kHz clock ticks vs 4 sec WDT interval: raw reading jitter during a relatively quiescent period.
That’s about 8 counts of jitter in the raw readings, which produces temperatures about ±0.1°C away from the central line. That’s actually worse than what I saw with the Xtal vs WDT trials, but the increase might be an artifact of the pokey time-base. The smoking gun now points squarely at variations in the WDT oscillator output as the source of the noise.
That’s kind of annoying, as it suggests it will take filtering/overhead to deliver better than about 0.1°C from this technique, even though higher resolution is obviously there in the deltas. The real trick will be matching the right filter with all the other time lag / constraints in this system. Still, extra data that you can get from a code trick is handy, even if it sometimes only serves to verify that one of your other sensors hasn’t gone squirrely.
—> just had a thought: oversampling & decimation eats noise like that for breakfast!
Just fired up a run taking 256 x 16ms samples (the shortest WDT interval allowed) with Timer1 back on the xtal clock. Now I just have to wait another 24 hours to see if it works…
Addendum 2019-02-28
OK: data’s in from oversampling the WDT vs timer1 method. I sum the timer1 counts from 256 readings (on a 16 msec WDT interval) and then >>4 to decimate. These repeated intervals took about 4 seconds of sampling time.
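The oversampling pass itself is simple enough (countTicksForOneWDTinterval() here is a hypothetical stand-in for the WDT-vs-Timer1 reading described above):
// classic oversampling: 4^n samples buys n extra bits, so 256 = 4^4 samples
// get summed and then decimated with >>4 for four extra bits of resolution
unsigned long sum = 0;
for (int i = 0; i < 256; i++) {
  sum += countTicksForOneWDTinterval(); // one 16 msec WDT interval per call
}
unsigned long oversampledReading = sum >> 4;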
si7051 reference temperature vs 256x Oversampled Timer 1 reading on 16 msec WDT interval: Fit Equation
This produced 750 counts/°C for a potential resolution of 0.0013°C, but as with the previous trials, the method falls down because the jitter is so much larger:
Variability on 256 Timer1 readings of 16msec WDT interval : During quiescent period
100 points of raw precision error brings the method back down to a modest ‘real’ resolution of only ±0.066°C at best. The fact that this variability is so similar to the previous trials, and that oversampling did not improve it, tells me that the problem is not noise – but rather that the WDT oscillator is wandering around like a drunken sailor because of factors other than just temperature. If that’s the case, there’s probably nothing I can throw at the problem to make it go away.
Several people pointed out that there is another way to measure temperature with some of the Atmel chips, so I decided to fire that up for a head-to-head trial against the WDT method. Most people never use it because the default spec is ±10°C, and it only generates 1 LSB/°C of correlation to temperature, for a default resolution of only 1°C. Some previous efforts with this internal sensor produced output so bad it was used as a random seed generator.
But heck, if I’m going through the effort of calibration anyway, I might be able to level the playing field somewhat by oversampling those readings too:
si7051 reference temperature °C vs 4096 reading oversample of the internal diode:Fit equation
Even with 4096 samples from the ADC, this method only delivered ~75 counts /°C. But the internal diode is super linear, and the data is less variable than the WDT:
Variability from 4096 ADC readings of the internal reference diode :During quiescent period
Five counts of raw variability means the precision error is only ±0.033°C (again, this becomes our real resolution, regardless of the raw count delta). So even after bringing out the big guns to prop up the WDT, the internal reference diode blew the two-oscillator method out of the water on the very first try.
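The core of that oversampled diode reading follows; the ADC interrupt’s only job here is to count completed conversions while the processor sleeps between them: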
#include <avr/sleep.h>                 // for set_sleep_mode / sleep_cpu etc.
volatile uint16_t adc_irq_count;       // tracks how many conversions have completed
unsigned long internalDiodeReading;
ISR (ADC_vect) {
  adc_irq_count++;
}
// inside the reading function:
internalDiodeReading = 0; adc_irq_count = 0;
unsigned long sum = 0; unsigned int wADC;
ADMUX = (_BV(REFS1) | _BV(REFS0) | _BV(MUX3)); // set the 1.1v aref and mux for the diode
ADCSRA |= _BV(ADIE);                   // enable the ADC interrupt so ADC_vect can fire
ADCSRA |= _BV(ADSC);                   // throwaway 1st conversion to engage the settings
delay(10);                             // wait for the ADC reference cap. to stabilize
do {
  noInterrupts();
  set_sleep_mode(SLEEP_MODE_ADC);      // sleep mode just to save power here
  sleep_enable();
  ADCSRA |= _BV(ADSC);                 // start the ADC
  do {
    interrupts();
    sleep_cpu();                       // sleep (MUST be called immediately after interrupts())
    noInterrupts();                    // so the while(bit_is_set) check isn't interrupted
  } while (bit_is_set(ADCSRA, ADSC));  // back to sleep if the conversion isn't done
  sleep_disable(); interrupts();       // enable interrupts again
  wADC = ADCW;                         // reading ADCW combines both ADCL & ADCH
  sum += wADC;                         // add the new reading to the total
} while (adc_irq_count < 4096);        // sets how many times the ADC is read
ADCSRA &= ~_BV(ADIE);                  // no more ADC interrupts after this
internalDiodeReading = (sum >> 6);     // decimation turns the sum into an over-sampled reading
For now, I think I’ve run out of ideas on how to make the WDT method more precise. Oh well – it’s not the first time I’ve tried to re-invent the wheel and failed (and it probably won’t be the last…). At least it was a fun experiment, and who knows, perhaps I’ll find a better use for the technique in some other context.
I’ll spend some more time noodling around with the internal diode sensor to see if I can wrestle better performance out of it, since it’s probably less vulnerable to aging drift than the WDT oscillator. I’m still a bit cautious about oversampling the internal diode readings, because the math depends on there being at least 1-2 LSBs of noise to work with, and I already know the internal 1.1v ref is quite stable. I threw the sleep mode in there just to save power & reduce internal heating, but now that I think about it, the oversampling might work better if I let the chip heat a little over the sampling interval – substituting that as a synthetic replacement for noise if there is not enough. The other benefit of the diode is that it will be resilient to a varying supply voltage, and we are currently experimenting with more loggers running directly from batteries to see if that’s a viable way to extend the operating lifespan.
Addendum 2019-04-05
So I’ve kept a few units perking along with experiments oversampling the internal diode. And on most of those runs I’ve seen an error that’s quite different from the smoothly wandering WDT method. The internal diode readings have random & temporary jump-offsets of about 0.25°C:
si7051 (red) reference vs internal diode (orange), oversampled: 16384 reads then >>10 for heavy decimation.
These occur at all temperatures from +20 to -15°C, and I have yet to identify any pattern. The calculation usually returns to a very close-fitting line after some period at the offset, with roughly half of the overall time spent on the closely fit calibration line. This persists through all the different oversampling intervals, and stronger decimation does not remove the artifact – it’s in the raw data. No idea why… though I am wondering if perhaps the clock’s entropy/drift is somehow quantized?
This tutorial is the second in a series on adding displays to expand the capability of the Arduino data loggers described in our SENSORS paper earlier this year. As more of those DIY units go into the field, it’s become important that serial #s & calibration constants are embedded inside the files produced by each logger. One can always hard-code that information, but with multiple sensors and screens for live output, space is getting tight:
This prompted me to look at the ATmega328p’s internal EEprom as a potential solution. The EEprom can only store one byte per location, so saving numbers larger than 255 requires you to slice them up and store them in consecutive memory locations. That takes two locations for the “high byte” and “low byte” of an int, and four memory locations for longs & floats. That’s a little clunky, but it mirrors the way you read & write high-bit registers on I2C sensors, so there are lots of code examples out there to follow. Piece-by-piece approaches also require memory pointers for the retrieval and re-assembly of your variables.
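A hedged sketch of that slice-and-store pattern, using the Arduino highByte() / lowByte() / word() helpers (the function names here are mine):
#include <EEPROM.h>
void writeIntToEEprom(int address, int value) {
  EEPROM.update(address, highByte(value));      // "high byte" in the first location
  EEPROM.update(address + 1, lowByte(value));   // "low byte" in the next one
}
int readIntFromEEprom(int address) {            // pointer math & re-assembly on retrieval
  return word(EEPROM.read(address), EEPROM.read(address + 1));
}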
A more elegant method is to roll them into a single ‘struct’ and store that with a generic function, but even with read_block & write_block I’d still be tweaking the code for each logger, since they often have dramatically different sensor combinations. I wanted a more generic “cut & paste” method that would handle file headers and variable boilerplate info (contact emails, etc.) without me having to update a bunch of pointers. The real clincher was realizing that the equation constants generated by my thermistor calibration procedure had too many significant figures to store in an Arduino float variable anyway.
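For completeness, the struct route looks something like this with the EEPROM library’s put() & get() wrappers (which handle the block transfer and only burn write cycles on bytes that actually changed – the field names here are just placeholders):
#include <EEPROM.h>
struct CalConstants {                  // hypothetical example fields
  int serialNumber;
  float thermistorA, thermistorB, thermistorC;
};
void setup() {
  CalConstants cal = {198, -0.00036131f, 0.00034798f, -0.00000019f};
  EEPROM.put(0, cal);                  // stores the whole struct starting at address 0
  CalConstants restored;
  EEPROM.get(0, restored);             // pulls it back out in one call
}
void loop() {}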
A char array provided the simplest way to achieve the flexibility I was after, and I wrapped that up into a little “EEprom loader” utility: (Note: Github link at the end of this post)
#include <EEPROM.h>
#define PADLENGTH 1024
char eepromString [PADLENGTH+1]; //+1 to leave room for null terminator
void setup(){
strcpy(eepromString," ");
strcat(eepromString,"\r\n");   // a carriage return, for Excel
strcat(eepromString,"#198, The Cave Pearl Project");
strcat(eepromString,"\r\n");
strcat(eepromString,"Etime=(raw*iSec)/(86400)+\"1/1/1970\"");
// NOTE: \ is an escape which lets you put "special" characters into the string
strcat(eepromString,"\r\n");
strcat(eepromString,"Tres=(SeriesOhms)/((((65536*3)-1)/Raw16bitRead)-1)");
strcat(eepromString,"\r\n");
strcat(eepromString,"1M/100k@A7(16bitP32@1.1v),A=,-0.0003613129530,B=,0.0003479768279,C=,-0.0000001938681482");
strcat(eepromString,"\r\n");
// following this pattern, simply add more lines as needed (up to the 1024 character max)
// This fills the remaining unused portion of eepromString with blank spaces:
int len = strlen(eepromString);   // strlen does not count the null terminator
memset(&eepromString[len],' ',PADLENGTH-len);
// Now write the entire array into the EEprom, one byte at a time:
for (int i = 0; i < 1024; i++){
EEPROM.update(i, eepromString[i]);
}
}
void loop() {
// nuthin here...
}
This is just a bare-bones example I whipped up, and it’s worth noting that strcat will overflow if you try to add more characters than eepromString can hold. You can check your count at sites like lettercount.com, or try using snprintf() as an alternative that avoids that problem. Since this is a ‘run-once’ utility, I haven’t bothered to optimize it further. Bracketing calibration numbers with a comma on each side makes them directly usable when the CSV data file is loaded into Excel later, because they end up inside their own cells. It’s a good idea to avoid putting EEPROM.write calls inside the main loop, or you could accidentally burn through the limited write cycles of your internal EEprom. EEPROM.update is safer than EEPROM.write, because it first checks the content of each memory location, and only updates it if the new information to be stored is different.
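If you want guardrails instead of a character count, here’s a hedged sketch of the snprintf() route (the calibration constants ride along as pre-formatted strings, since avr-libc’s printf doesn’t format floats by default, and they have too many digits for a float anyway):
char eepromString[PADLENGTH + 1] = "";
size_t len = 0;
len += snprintf(eepromString + len, sizeof(eepromString) - len,
                "#198, The Cave Pearl Project\r\n");
len += snprintf(eepromString + len, sizeof(eepromString) - len,
                "A=,%s,B=,%s\r\n", "-0.0003613129530", "0.0003479768279");
// snprintf truncates rather than overflowing, and returns the length it
// tried to write - so check that len stays below sizeof(eepromString)
// before appending again if there's any chance of hitting the limit.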
With the data stored in the EEprom, it only takes a few lines to transfer that information to the SD card when a new file gets created:
char charbuffer[1]; // a one character buffer
file.open(FileName, O_WRITE | O_APPEND);
for (int j = 0; j < 1024; j++) {
charbuffer[0] = EEPROM.read(j);
file.write(charbuffer[0]);
}
file.close();
The spaces used to pad out the array so that it fills all 1024 bytes of the EEprom do create an extra blank line in the file, but that’s a pretty harmless trade-off for this level of code simplicity.
What else can we store in the EEprom?
In the post covering how to drive a Nokia5110 LCD using shiftout, I went into some detail on the way the fonts for that screen were created and displayed. Three simple cascading functions (originally from Julian Ilett) let you send any string of ASCII characters to the screen.
void LcdWriteCharacter(char character)
{
  for(int i=0; i<5; i++){
    LcdWriteData(pgm_read_byte(&ASCII[character - 0x20][i]));
  }
  LcdWriteData(0x00);   // one row of spacer pixels between characters
}

void LcdWriteData(byte dat)
{
  digitalWrite(DCmodeSelect, HIGH);            // high for data
  digitalWrite(ChipEnable, LOW);
  shiftOut(DataIN, SerialCLK, MSBFIRST, dat);  // transmit serial data
  digitalWrite(ChipEnable, HIGH);
}
I modified that code slightly, with reduced font-set arrays stored in PROGMEM, and introduced a method for displaying larger numbers on screen by printing the upper and lower parts of each number in two separate passes. PROGMEM requires pgm_read_byte to get the data out of the 2-D arrays for printing.
Now, with a little bit of juggling, a “loader” script can store that font data in the 328P’s internal EEprom by converting the two dimensional font array into a linear series of memory locations:
const byte ASCII[][5] PROGMEM =
{
{0x00, 0x00, 0x00, 0x00, 0x00} // 20
,{0x00, 0x00, 0x5f, 0x00, 0x00} // 21 !
,{0x00, 0x07, 0x00, 0x07, 0x00} // 22 "
,{0x14, 0x7f, 0x14, 0x7f, 0x14} // 23 #
,{0x24, 0x2a, 0x7f, 0x2a, 0x12} // 24 $
,{0x23, 0x13, 0x08, 0x64, 0x62} // 25 %
,{0x36, 0x49, 0x55, 0x22, 0x50} // 26 &
,{0x00, 0x05, 0x03, 0x00, 0x00} // 27 '
,{0x00, 0x1c, 0x22, 0x41, 0x00} // 28 (
,{0x00, 0x41, 0x22, 0x1c, 0x00} // 29 )
,{0x14, 0x08, 0x3e, 0x08, 0x14} // 2a *
,{0x08, 0x08, 0x3e, 0x08, 0x08} // 2b +
,{0x00, 0x50, 0x30, 0x00, 0x00} // 2c ,
,{0x08, 0x08, 0x08, 0x08, 0x08} // 0x2d (dec 45) (-) in row 13 of source array
,{0x00, 0x60, 0x60, 0x00, 0x00} // 2e .
//...etc, a complete font is in the array, but I only store a 46 character "caps only"
// subset in the eeprom ranging from (-) at array row 13 to capital (Z) = array row 58
// this takes 230 bytes of EEprom memory storage (addresses 0-229)
};
// move that sub-set of the font (rows 13 to 58) from PROGMEM into the 328's internal EEprom
byte charbuffer;                    // holds one byte-column during the transfer
int currentIntEEpromAddress = 0;    // each letter is constructed from 5 byte-columns of data
for(int i=13; i<59; i++){           // so each i value forces a 5-byte jump in EEprom address
  for(int j=0; j<5; j++){           // while the j value counts through each individual column
    charbuffer = pgm_read_byte(&ASCII[i][j]);           // i & j used separately in the array
    currentIntEEpromAddress = ((i-13)*5)+j;             // i & j combined for the EEprom address
    EEPROM.update(currentIntEEpromAddress, charbuffer); // update saves unnecessary writes
  }
}
Once the font has been transferred into the internal EEprom by the loader program, I only need to make two small changes to the original display functions so they pull that font from the EEprom. Note the calculation trick (character-0x2d) which uses the ASCII code for each character to calculate where the first of the five bytes is located for that character:
void LcdWriteCharacter(char character)
{
  for(int j=0; j<5; j++){
    LcdWriteData(EEPROM.read(((character - 0x2d)*5)+j));
    // we subtract 0x2d because the zeroth position in EE memory is (-) = ASCII(0x2d)
  }
  LcdWriteData(0x00);
}
// (character - 0x2d)*5 jumps 5 bytes at a time through the EEprom address space
// Add a fixed offset for other fonts stored at higher locations in the memory:
// eg: LcdWriteData(EEPROM.read(235+((character - 0x2d)*5)+j));
// the big# fonts are 11 byte-columns wide, so the calc becomes (Offset+((character-0x2d)*11)+j)
This adds some delay, but because the Nokia 5110 is already dead-dog slow, it’s not even noticeable. A similar mod gets applied to the split print functions for the big number font, and moving both to the EEprom still leaves a very serviceable ~500 characters for file header info. The trick to stacking different kinds of information in the EEprom is figuring out the resulting address offsets, so that each reading function starts at the correct memory address, and so that the header-transfer loop skips over the memory locations holding the font data.
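One way to keep that straight is a block of named offsets at the top of the logger code. The numbers below just follow this post’s layout (the example code above parks the big font at 235, leaving a few spare bytes after the small font), so treat them as illustrative:
const int SMALLFONT_OFFSET = 0;   // 46 chars x 5 byte-columns: (-) through (Z)
const int BIGFONT_OFFSET   = 235; // big number font, 11 byte-columns per character
const int HEADER_OFFSET    = 524; // remaining ~500 bytes hold the file header text
// each reading function then adds its own offset, e.g.:
// LcdWriteData(EEPROM.read(BIGFONT_OFFSET + ((character - 0x2d) * 11) + j));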
With the fonts stored in EEprom, the remaining Nokia 5110 functions compile to just a little over 400 bytes of program storage and 10 bytes of dynamic memory (excluding EEPROM.h, which gets called by some of my other libraries even if the screen is not present). That’s with three duplicate copies of each LCD function, because I’ve used one set for the standard size font, and two more for printing the large numbers in upper & lower rows. With a bit more optimization I could get that down to about 200 bytes, which I suspect is probably the smallest memory footprint achievable for adding live data output to my loggers.
I’ve posted an ‘EEprom loader utility’ which demonstrates the dual Font/Text approach at our project’s GitHub repository, so people can modify it to suit their own projects. On loggers with a number of DS18b20 sensors, I’ll use a similar approach to store the sensor bus addresses, which I usually keep in two-dimensional arrays very similar to those shown here for the screen fonts.