Category Archives: Expedition Reports & Updates

Photos, videos & reports from the field.

Field Report: 2015-03-23 Flow Sensor “drag fins” tested

Our deployments last year saw only modest instrument response in slower systems, especially those where the water was flowing at less than 1 cm/second. Most of the deep saline circulation fell into this category, and we really wanted better data from those caves. So I came up with an add-on attachment for the flow meters, hoping to dramatically increase their surface area without affecting buoyancy very much.

Technically, this was an introduction to mapping beach deposits, but to me it looked like geo-scientist Kung-Fu.

I had a couple of these new fins on this trip, and I asked my wife, who was busy leading the Northwestern University earth science students around the peninsula, when I might sneak away from the group for a few hours to see if they actually worked. She suggested instead that we do an actual deployment, using the opportunity to expose the undergrads to some aspects of real underwater fieldwork. I was instinctively cautious about this idea, having seen a fair number of tech demos go wrong over the years, but I have also come to realize that Trish's enthusiasm is an unstoppable force, so we added the dive to a schedule that was already bursting.

The new "parallel" anchor rig made it easy to see differences in instrument response from the last deployment. It's hard to achieve consistent results with all the changes I make from one generation of underwater housings to the next.

With all the other activities on the go, it was mid-afternoon before we actually donned our gear, while answering questions from the students about the double tanks, and doing little demos of the other cave-diving kit. Then we waddled off to the water's edge, festooned with the mesh bags of loggers, cables, and other bits that accompany a typical deployment. I'm not saying we looked bad, but it was probably clear to the students that we weren't going to peel off and reveal a freshly pressed tuxedo after the dive 😉  Once in the water, we had a short swim along an open channel to the cave entrance with a gaggle of snorkeling students following our progress at the surface. One of our primary lights started acting a bit flaky on the way, and we had another impromptu Q&A session floating among the mangroves as one student paddled back to fetch our spare. A bother, but it did underscore what we had said earlier about all the redundant equipment we were carrying. When the extra light arrived, we started the dive, and I made a note to myself to take more photos than usual for the debrief that would occur at the end of the day.

Here I adjust the ballast mass on one of the flow meters before installation (with the new drag fin in the foreground).

Once on site, Trish set our mesh bags up in a little work area, and I swam out for the usual round of inspections. North? Check. Epoxy OK? Check. Vortex shedding? Etc. Once that was in the dive notes, I began the one-by-one exchange of the old units. The indicator LEDs pipped right on schedule, telling us that we had no epoxy failures this time round. Once all the flow sensors had been replaced, I took a few photos and noted that the unit closest to the main line was not being deflected as much as the other sensors, so I added the new drag fin to that heavier unit. I also had a pressure sensor to install, and while I switched that out I could see that the sensor with the new drag fin was now almost horizontal compared to the other sensors:

I don’t know about you, but I am calling that a success.  In faster systems the fin might clip the high end, although the cross sectional area now changes quite a bit as the unit approaches 90 degrees. Any approximation with drag on a sphere has also gone out the window, but I already knew that empirical testing was going to be necessary to get point velocities. As I refine this idea, I will come up with different sizes, and integrate the baffles more elegantly with the ballast mass adjustment. Wheels are already turning in my head with possibilities.

Addendum:

As part of the extra video we captured for the students, I recorded a short clip of our exit from the cave. With the water at high flow, there was significant mixing at the fresh/salt water interface, producing an optical consistency similar to salad dressing. This is limited to the mixing zone itself, and you can see the water clear up again when I place the camera below the level of the interface. While cave divers run into this kind of thing frequently, it's probably something that regular divers don't experience very often. So I thought I would post the clip just to show people what it was like:


Field Report: 2015-03-18 One logger sacrificed to the sea gods.

The B4 unit has been a star performer despite the fact that it was one of the earliest logger units I ever built. It has been running continuously since its first cave deployment in March 2014.

I was happy to see Gabriel from CEA the next morning to discuss retrieval of the open water units, but he delivered some unwelcome news to go with my morning coffee: the logger at the mouth of Yalku Lagoon had gone missing. Losing the unit itself was irritating, but losing four months' worth of data – that hurts!  Another pivot joint on one of the loggers in the bay had failed the week before, and one of the reef volunteers spotted that unit while it was still hanging from the backup bungee. I received that news before we headed south, so I had quickly crafted some stronger universal joints from PVC to fix the problem. It was salt in the wound to know that these new pivots were sitting in my suitcase, having arrived a day or so too late to save the Yalku unit. Darn!

Oh well, we can only try again, and there is some solace in the fact that we are not the only ones to see equipment suffering this fate.  There is still a small chance that someone will pick it up further down the beach, try to figure out what it is on Google, and send us an email to say they still have the SD card. From this point forward, I will be labeling the inside of my loggers as well as the outside, and I will add a little “If found please email…” blurb into the data files.

Marco cuts B3 from its mooring on the south side of the bay. Marco has been doing regular checks on the loggers since the beginning of the open water experiment.

Gabriel had meetings to attend, so Everett, Marco and I popped on some fins and swam out to recover the units in Akumal bay.  As usual they were covered with a crop of algae & other critters but both B4, and the B3 unit that I rebuilt on the last trip, were still running smoothly. The unit in shallow water was so encrusted that I told Marco to pull the whole assembly, including the anchor plate, because there was no way to inspect it through all the accumulated cruft. That bio-fouling likely increases the drag and the buoyancy of the meter over time.

 

I left one of the new universal joints on the B4 anchor plate – hopefully robust enough to save us from more losses.

These two loggers have now been in the open ocean for seven months, and once ashore I began a familiar routine, cleaning them with green scrubby pads and copious amounts of rubbing alcohol. The Loctite E-30CL epoxy on the LEDs is holding up well, although the JB weld on one of the DS18b20 temperature sensors definitely has little patches of rust showing through. The stainless steel ballast washers appear to have fused together, but the O-rings are still looking good. The nylon bolts are sounding crunchy, perhaps indicating that they are starting to get brittle, so I might replace them on the next round. Now that the new pivot joints are more robust, I probably need to think about upgrading the rest of the connections as well.

 

The JB weld on the DS18b20 sensors is getting a bit crusty.

Once the data was downloaded, I reset the RTCs, and checked that the sleep currents were still the same as they were in December. These units have Tinyduino stacks, so they run with fairly high sleep currents (around 0.7 mA). With six new AAs in the power module they should still deliver 6-9 months of operation. After adding a fresh desiccant pack they were sealed & ready to deploy with code that crops the highest and lowest off of 13 readings, spaced 8 seconds apart. I average the remaining 11 readings to filter out the high frequency wave turbulence and get at the underlying flow direction. So far this approach seems to be working well. I also gave Gabriel a new logger to replace the one we had lost at Yalku lagoon. Hopefully the sea gods will smile upon this new deployment.
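For anyone curious what that trimming looks like in code, here is a minimal sketch of the idea (not the actual deployment code), with readSensor() standing in as a hypothetical placeholder for whatever tilt/accelerometer value the flow meter records:

```cpp
// Minimal sketch of the oversampling approach described above (not the deployment code).
// readSensor() is a hypothetical stand-in for the real tilt/accelerometer reading.

const uint8_t NUM_SAMPLES = 13;          // total readings per logging interval
const unsigned long GAP_MS = 8000UL;     // 8 seconds between readings

int readSensor() {
  return analogRead(A0);                 // placeholder: substitute the real sensor read here
}

// Take 13 readings spaced 8 s apart, drop the single highest and lowest,
// and return the average of the remaining 11 to suppress wave turbulence.
float trimmedAverage() {
  long sum = 0;
  int lowest = 32767, highest = -32768;
  for (uint8_t i = 0; i < NUM_SAMPLES; i++) {
    int reading = readSensor();
    sum += reading;
    if (reading < lowest)  lowest  = reading;
    if (reading > highest) highest = reading;
    if (i < NUM_SAMPLES - 1) delay(GAP_MS);   // the real logger would spend this gap in a low power sleep
  }
  sum -= lowest;                          // crop the two extremes
  sum -= highest;
  return (float)sum / (NUM_SAMPLES - 2);  // average of the remaining 11 readings
}

void setup() { Serial.begin(9600); }

void loop() {
  Serial.println(trimmedAverage());
}
```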


Field Report: 2015-03-17 Drip Logger Service Visit

I pressed Everett (a Northwestern grad student) into helping me with the manual counts when I collected the drip loggers.

In March my wife led a Northwestern University earth sciences trip to the Yucatan Peninsula. While she was busy with all the necessary preparations for the students who would be arriving shortly, I slipped away for a couple of hours to retrieve the loggers we left at Rio Secreto last year. With so many new units in that deployment, I was really chomping at the bit to see how they fared.

As usual we had some good news, and some bad news from the deployed loggers. Actually, we probably set a new record on the bad news side of things as the two relative humidity loggers that I cobbled together before the December trip went bananas as soon as we brought them into the caves. The HTU21D sensor on unit 030 died on Dec. 20th, one single day after it was deployed, while the sensor on 028 lasted for four days before it pooped out.  Both delivered crazy readings the whole time even though they seemed to be working fine on the surface.

Even potted under epoxy, the solder contacts were severely oxidized.  I suspect that moisture was able to “creep” along the surface of the breakout board, because of the area exposed around the humidity sensor.

The epoxy in the sensor wells had turned yellow & rubbery, even though it was clear and rock hard when the units were deployed. But these sensor caps were assembled just days before the flight, and I used 5 minute epoxy to speed the build, rather than the usual 30 minute stuff. So I am thinking that the moisture resistance of the faster curing epoxies is much lower. Perhaps it's time for me to investigate some new urethane options with lower permeability? It is also possible that PVC solvent residue interfered with the epoxy's chemistry because I built them so quickly.

 

Despite its "splash proof" rating, this MS5805-02 died after one month in the cave. It had no "direct" water contact.

The loggers kept running after the RH sensors stopped working, but both eventually quit long before draining the AA battery packs, which leads me to conclude that rusty contacts shorted the I2C bus, preventing the RTC alarms from being set. We also lost one of the pressure sensors, and a TMP102 board. In fact the only sensor still fully operational when I pulled the loggers was the MS5803-02 pressure sensor, once again showing just how robust those pressure sensors are under their white rubber caps.

 

The white ball is an older first gen housing for an underwater pressure unit, and the black cylinder above is a drip sensor, acting as a crude rain gauge. I don't know who collected the rain water.

I left a new RH&Pressure unit in the cave, which was made with E30CL and had more than a month of test runs under its belt before going into the field. Even with fully cured epoxy, there is still the possibility that moisture will penetrate through the exposed RH sensor, so I will look into moving that sensor off the main housing for my next builds.

We also had some sensors on the surface during this last deployment, and they faced dramatically different challenges under the full tropical sun.  The pressure logger had been re-purposed from a four month cave deployment. It sported a DS18b20 temp sensor, and an MS5803-05 pressure sensor, which both performed beautifully in the underwater environment.

But as you can see from the pressure record (in mBar) things did not go so well this time around:

I was expecting daily fluctuations of a few millibars, so there is no way I can believe that local highs reached 1200 mBar… but what happened?  This pressure sensor had been used first for an underwater deployment, so it had a layer of Qsil silicone over top of it. This caused a -50 mbar offset, but did not seem to give us any other problems in the thermally stable cave environment. But with full sun exposure this logger saw huge daily temperature variations (detailed below). I believe this caused the silicone above the sensor to expand and contract, exerting enough physical pressure to overwhelm the more subtle barometric readings. Unfortunately I did not have time to look at this data while we were in the field, so the unit was redeployed, although this time in a more sheltered spot under the palapa roof.

Now for the good news:

The drip sensor which we left beside that pressure logger on the surface delivered a respectable record despite the fact that it had no collecting funnel:

That peak of 8000 counts (/15 min.) is about 9 drips per second on the surface of the unit which, with all the delays I put in the code to suppress double count artifacts, might be approaching the max response of the sensor itself. With no way to capture the water, gentle foggy rain events would not have triggered the impact sensor, so there is a good chance that a significant amount of precipitation did not get recorded. But what makes this record so impressive to me is the RTC temperature log from inside the housing:  (in °C)

The black end cap actually started melting grooves into the white PVC of the drip logger housing.

The spec sheet maximum for the DS3231 is 70°C, and the Arduino MCU's limit is 85°C. Even so, with daily peaks reaching nearly 60°C I am quite surprised that the batteries did not pop. The little logger did not escape this trial by fire completely unharmed, as the micro SD card went from a nice low current sleeper to pulling around 1 mA all the time. The data was intact, but I can only surmise that the high temps cooked some of its control circuitry. The upper ABS surface also changed from a neutral frosted white to a slightly fluorescent green/yellow color, presumably because of intense UV exposure. After replacing the batteries & SD card, the unit was put back on the roof for another run. Just to be on the safe side I added a second unit in case the first one gives out.
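As an aside, that internal temperature log comes essentially for free: the DS3231 does a temperature conversion every 64 seconds to compensate its own crystal, and exposes the result in two registers. A minimal sketch of reading it over I2C (assuming the usual 0x68 address; this is an illustration, not the logger's actual code):

```cpp
// Minimal sketch: read the DS3231's internal temperature registers (0x11 MSB, 0x12 fraction).
// The RTC measures temperature for its own crystal compensation; resolution is 0.25 C.
#include <Wire.h>

const uint8_t DS3231_ADDRESS = 0x68;

float readRTCtemp() {
  Wire.beginTransmission(DS3231_ADDRESS);
  Wire.write(0x11);                       // point at the temperature MSB register
  Wire.endTransmission();
  Wire.requestFrom(DS3231_ADDRESS, (uint8_t)2);
  int8_t msb = Wire.read();               // signed integer degrees C
  uint8_t lsb = Wire.read();              // upper two bits hold the quarter-degree fraction
  return msb + (lsb >> 6) * 0.25;
}

void setup() {
  Wire.begin();
  Serial.begin(9600);
}

void loop() {
  Serial.println(readRTCtemp());
  delay(60000);                           // once a minute is plenty; the RTC only updates every 64 s
}
```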

While I leave the heavyweight analysis of the hydrographs to the expert on the team, I couldn't help peeking to see if these surface storms affected the in-cave drip flow rates. I was quite surprised to see that the precipitation events had small effects on some of the counts, while barely registering as a blip on others that were quite nearby. This is the record from DS20 (15 min bins, with a purple overlay of the surface record, which is not on the same scale):

And this is the record from DS02, located less than 5m away in the same chamber:

Given the thin soils of the area, I suspect that much of that brief rain evaporated shortly after surface contact, or the dry season vegetation was just sitting there like a sponge, able to absorb most of it quickly.

The whole group of loggers represents a mixed bag of first and second generation builds with many different “mini” form factor Arduino boards in them. I left the batteries in a couple of units back in December so I could see some longer term battery discharge curves:

Long run battery discharge curves for DS01 & DS02.

These two units were using three lithium AA's, which, as I knew from the 1st generation test results, are about 2/3 depleted when they hit that 5000 mV shoulder. This tells me that DS01 would probably have delivered nine months of operation on these cells. This is very good news, because even the loggers I built with no-name eBay clones (MIC5205 v.regs) sleep around 0.33 mA if they have good SD cards. So it should be safe to put them on a six month rotation schedule.

In addition to their drip counts, several of the loggers were running with different eeprom buffering levels to help me understand how this affects the power budget. I won't wade into all of that data here, but two of the most directly comparable records are from units 26 & 27:

Logger # | Starting voltage | Sleep current | Records buffered | Voltage drop / 8500 records
26       | 5243 mV          | 0.28 mA       | 512              | 30 mV
27       | 5198 mV          | 0.26 mA       | 96               | 33 mV

Unit 26 was handicapped by a slightly higher sleep current and a starting voltage above the lithium plateau (I often see a small quick drop on 3xAA lithiums above 5200 mV). The fact that it still delivered a smaller voltage drop on the batteries over the three month run implies that increasing the size of the eeprom buffer does improve performance. Logger 26 had a 32K eeprom, so it only experienced 16 SD card writing events, while the smaller 4K buffer on unit 27 required 87 SD writes. Both loggers created six separate log files during the run, and the cumulative drip counts were comparable. It's still a close call, and the increased buffering does not provide a huge savings, perhaps on the order of 5-10%. Since the extra I2C eeproms only cost $1.50, and the coding to support them is trivial, I consider that to be an easy way to get another month of run time. As with the buffering tests I did back in 2014, it's clear that all those eeprom page-writes (3 mA x 5 msec + mcu uptime) take a significant amount of power. But at least they are not subject to the random latency delays you see with SD cards.
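The buffering pattern itself is simple. Below is a rough sketch of the approach, not my actual logger code, assuming a 24LC256-style 32K I2C eeprom at address 0x50 and 16-byte fixed-length records: stash each record in the eeprom, and only spin up the power-hungry SD card when the buffer fills.

```cpp
// Rough sketch of the eeprom-buffering pattern described above (not the deployment code).
// Assumes a 24LC256-style 32K I2C eeprom at address 0x50 and 16-byte fixed-length records,
// so each aligned record fits inside a single page write.
#include <Wire.h>
#include <SPI.h>
#include <SD.h>

const uint8_t  EEPROM_ADDR  = 0x50;
const uint16_t RECORD_BYTES = 16;
const uint16_t MAX_RECORDS  = 512;       // 512 x 16 bytes = 8K of the 32K part
const uint8_t  SD_CS_PIN    = 10;        // SD chip select -- adjust for your wiring

uint16_t recordsBuffered = 0;

void eepromWriteRecord(uint16_t index, const uint8_t *record) {
  uint16_t addr = index * RECORD_BYTES;
  Wire.beginTransmission(EEPROM_ADDR);
  Wire.write((uint8_t)(addr >> 8));      // memory address, high byte first
  Wire.write((uint8_t)(addr & 0xFF));
  Wire.write(record, RECORD_BYTES);
  Wire.endTransmission();
  delay(5);                              // wait out the eeprom's internal write cycle
}

void eepromReadRecord(uint16_t index, uint8_t *record) {
  uint16_t addr = index * RECORD_BYTES;
  Wire.beginTransmission(EEPROM_ADDR);
  Wire.write((uint8_t)(addr >> 8));
  Wire.write((uint8_t)(addr & 0xFF));
  Wire.endTransmission();
  Wire.requestFrom(EEPROM_ADDR, (uint8_t)RECORD_BYTES);
  for (uint8_t i = 0; i < RECORD_BYTES; i++) record[i] = Wire.read();
}

// Called once per sampling interval: stash the record, and only touch the
// SD card when the buffer is full.
void logRecord(const uint8_t *record) {
  eepromWriteRecord(recordsBuffered++, record);
  if (recordsBuffered >= MAX_RECORDS) {
    File f = SD.open("datalog.dat", FILE_WRITE);
    uint8_t temp[RECORD_BYTES];
    for (uint16_t i = 0; i < MAX_RECORDS; i++) {
      eepromReadRecord(i, temp);
      f.write(temp, RECORD_BYTES);
    }
    f.close();
    recordsBuffered = 0;
  }
}

void setup() {
  Wire.begin();
  SD.begin(SD_CS_PIN);
}

void loop() { /* sampling & sleep code would call logRecord() here */ }
```

Keeping the record length an even divisor of the eeprom page size means an aligned record never straddles two page-write cycles, so each buffered save costs only a single one of those 3 mA x 5 msec events.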

I added larger LED limit resistors to each logger on this service visit, so even if the drip rates pick up dramatically during the wet season, the power used by the interrupt will be reduced compared to the runs since August. All units that were capable are now buffering about five days' worth of data to the eeproms. The current crop of "best builds", with Rocket Scream boards and pin-powered RTC's, are getting down to 0.15 mA, implying that they should be good for a year long deployment provided the SD cards & sensors hold out. Of course I don't count chickens anymore, no matter how good the numbers look. Though all units delivered a full data set on this deployment, two of them suffered dramatic increases in sleep current when their ADXL's suddenly started consuming more power. You can easily spot when this kind of sensor failure occurs by looking at the power supply voltage log:

The sleep current change on unit 020, as seen in its power supply voltage log.

I am sure there are more gems buried in the data, which I will post here as they are uncovered.


Field Report 2014-12-20: Our 2nd Drip Sensor Deployment at Rio Secreto

Now that we had all the flow sensors under water, the last big item on the to-do list was the installation at Rio Secreto. New memory cards brought all the 1st generation units down to a more reasonable 0.33 mA sleep current. The new builds were even better, coming in around 0.22 mA, and they have 32k eeproms allowing them to buffer up to five days' worth of data. Not wanting to risk too much on an untried idea, only three of the new builds are pin-powering the RTC, allowing them to get below 0.15 mA between drip counts. I have high hopes for this whole generation, and each has slightly different settings to help me zero in on an optimal approach to the eeprom buffering.

Ben Schwartz, Ed Mallon, Fernanda Lasas, Aubri Jensen

We were very happy to have other cave researchers join us for the day. It never hurts to have extra people on hand for the manual counts, because recording those initial drip rates the “old fashioned way” can take a substantial amount of time. It’s also good to have a fresh set of eyes on the installation, as I often miss things that are embarrassingly obvious while I juggle the low level details about each sensor in my head.  Being able to point out errors like that, with a sense of humor like Ben’s, is one of those vital fieldwork skills that never seems to make it onto the resume, but probably should. Of course, when you bring a bunch of karst researchers together in a beautiful system like Rio Secreto, the conversations start to sound a bit like this. (just substitute in the word “stal”)  I noticed a distinctly inverse relationship between our progress through the cave, and the number of syllables per second  😉

Here Aubrey and Ben are timing drips at a typical cluster.

Aubri and Ben record drip intervals at a 3-unit cluster.

The first chamber (which we started calling Fernanda's Lab) had a beautiful stalagmite garden on offer, and we tried to cluster drip sensors on actively growing stalagmites of different heights and shapes. Understanding how drip dynamics and vadose zone chemistry contribute to stalagmite growth is something of a holy grail for my wife, as it underpins all the paleoclimate records that people derive from cave formations. Of course, if figuring that out was easy, someone would have done it already. So we set to work locating drips with the "right stuff". Five units ended up in the stal garden, with another cluster off at a more distant pool. Units 04, 05 and 07 were replaced in their former locations with more robust anchoring this time, as these locations had been washed out by a high water event during the last deployment.

I put the new babies under a dry ledge to keep liquid water away from the humidity sensors.

The last data set gave us a gorgeous pressure & temperature record from the MS5803-02, and I continue to be impressed by sensors from Measurement Specialties. This time round we had two pressure & RH sensors to deploy in the cave, but I have no idea if those HTU21d's will survive such long exposure to the high humidity cave environment. The 5-bar pressure logger that we patched up with glue was going to be deployed on the surface, with a drip sensor beside it acting as a crude rain gauge, so we can get a better sense of what the cave drips are responding to. By the end of the day we had fifteen instruments on site, making this our largest installation ever!


Field Report 2014-12-18: A Water Flow Sensor Co-deployment.

No cueva aqui…

Over the next couple of days, we managed to double the number of flow sensors in the core systems along the transect that Trish was studying as part of a biology project headed up by Dr. Fernando Alvarez from UNAM in Mexico City. We revisited our test rig at the coastal site, leaving a set of three flow units there along with our 2 Bar pressure sensor. All had been tested showing good sleep currents, so hopefully we would not lose our data this time. Those dives went smoothly, and we ended up with a rare day on our fieldwork schedule that was not already booked.

We knew there was a system in the area where some biologist friends had a Lowell Instruments flow meter (the first time I have seen a commercial unit use the tilt/drag principle…) installed beside their monitoring equipment. But the meter was scheduled for removal in January of 2015, so this opportunity was brief. If we could make it out, a co-deployment would give us a chance to compare point velocity numbers from a commercial unit with our DIY project data. On the down side, it was quite a haul to get out there, on a road that our little rental car was not really capable of handling. But, thanks to my marathon rebuild session, we still had three flow sensors that we could deploy…

We decided to go for it, and just bring everything along in case the system looked interesting. Thirty minutes of slowly bumping, clunking and screeeeeeeeeetttcccching past the machete stumps, and we made it to the walking trail. From there we still had a good march out to the cenote, so we donned our gear and set out with doubles on our backs; placing each foot as carefully as we could on the uneven trail, while also trying to move fast enough to outpace the mosquitoes.  By the time we reached the water, it was getting pretty hot in those wetsuits!

Yep, we were not going to smell pretty after a dive in this stuff.

Yep, we were going to smell mighty good after this dive.

After we had cooled off, we did the pre-dive checks, secured our mesh bag full of sensors, and set off along the line. And the water was… brownish green (?), much more so than the other cenotes we had been diving in. But then I noticed that some of those bits of perc were actually moving around under their own steam, and I understood why a group of biologists would select a site that was so darned awkward to get to. It was heaving with little critters!  The site they had chosen for their monitoring equipment was at the top of a trapped dome: probably perfect for monitoring things like nutrients, but almost certainly with too little flow for my Pearls to register. I set up a sensor anyway, though if it really was a zero flow location we were not going to have much data to work with for that calibration. Who knows, perhaps we would get lucky and catch a big rain event flushing out the system during the overlap of the two meters?

Then we explored further into the cave.  The line soon descended below the limits of our humble point and shoot camera (so we had to leave it behind), but the visibility opened up below the tannic water showing a spectacularly wide passage, with the fresh/saline transition smack-dab in the center. Trish immediately became very excited, pointing out that there were ripples in the sediment.  We swam further into the passage, and she hand signaled her intent to find a location for a dual installation with our last two units.  I waited on the line while she searched, watching the slowly undulating halocline as it scattered our dive lights across the walls of the passage like a fun-house mirror. Once a site was selected, we configured one unit as a float, for installation on the floor of the passage, and directly above that we hung our last unit from the ceiling; up in the fresh water zone. After an inspection swim to check connectors & compass bearings, we shook hands in an exaggerated ceremony before following the jump reel back to the main line. I don’t think we could have picked a better place for our last installation of the trip if we tried. Worth the hike, and the bugs, and all the scratches & dents we put on that rental car…which now all seemed quite minor…really…no problem at all 😉


Field Report 2014-12-16: Rebuilding & Reloading the Old Sensors.

The depth limits on our point & shoot camera prevented us from capturing photos from the dives. But a pile of 1st & 2nd gen units was slowly accumulating on the floor in our room...

Each of the next few days started with a dive to replace the old units, which had been installed in deeper passages along System Ox Bel Ha, with the new generation of flow sensors. To avoid the ballast problems we had on the last round of salt water deployments, we decided to adjust the buoyancy of each flow sensor in-situ, while it was hanging from the support rods. The connectors themselves contributed varying amounts of negative buoyancy depending on their distance from the pivot joint, and some of the deep sites needed up to four rods (~2m) to get the flow meter into the right location in the water column. This required more time at depth than I would have liked the first time we tried it, but over the next few deployments we got reasonably good at weighting the units so that they were sensitive to the gentlest water movements. I need to put some more thought into making this procedure easier to do.

And we knew how important this fine-tuning was in the deep saline zone, because each unit we downloaded told us that the August flow sensors were far heavier than they should have been. Ten grams of negative buoyancy is fine in a coastal discharge that races along at 15 cm/second, but when the fastest flows are below 1 cm/s, the pearls needed to be as light as a feather. Semi-diurnal tides that jumped off the screen when we plotted the data from high flow sites barely rose above the ADC noise in systems like Maya Blue and Jailhouse. Of course there were more epoxy failures, and we continued to see units brought down by fake SD cards. The combination of these factors meant that we lost most of the data from the last generation of flow sensors. I will never trust retail packaging for electronic components again.

Each rebuild needed a few days of testing to catch code bugs…

And for the first time, we had so many sensors returning from deployments that refurbishing & reloading them was turning into a major part of the trip logistics. That sounds pretty obvious in hindsight, but I was so used to having the opposite problem, where we concentrated on squeezing every possible dive out of the "precious" YSI Sonde or Hydrolab, that having to triage old data loggers had never happened to us before. I started migrating parts from the younger units with failed epoxy into the older generation builds with sound housings. Then every logger had its SD card replaced with 'good sleepers', and I tested them all over again… just to be sure. I completely rebuilt two of the Beta generation units for CEA's open ocean deployment, and finally got around to putting the bma250's they carried into a low current sleep mode. I even melted grooves into the housings so that Marco could check the sensor orientation "by feel" after they turned into floating algae farms.
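If you want to do the same, putting a BMA250 into its low current suspend mode should be a single register write over I2C. A hedged sketch, assuming the default 0x18 address and the 0x11 power-mode register with the suspend bit at bit 7; verify against the datasheet before relying on it:

```cpp
// Hedged sketch: drop a BMA250 into suspend mode with a single register write.
// Assumes the default 0x18 I2C address and the 0x11 power-mode register with
// the suspend bit at bit 7 -- check the datasheet before trusting these values.
#include <Wire.h>

const uint8_t BMA250_ADDR    = 0x18;
const uint8_t BMA250_PM_REG  = 0x11;
const uint8_t BMA250_SUSPEND = 0x80;

void bma250Suspend() {
  Wire.beginTransmission(BMA250_ADDR);
  Wire.write(BMA250_PM_REG);
  Wire.write(BMA250_SUSPEND);
  Wire.endTransmission();
}

void setup() {
  Wire.begin();
  bma250Suspend();                       // sensor current should drop to the microamp range
}
void loop() {}
```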

I hope it is sealed well enough for a surface deployment…

Things proceeded well: All clocks on UTC? Check. Replace old style battery connectors? Check. Good data saves from test runs? Check. Every few hours saw another unit up and running with reasonable sleep currents. But the failed pressure sensor posed a bit of a problem. Bad epoxy or not, we needed two pressure units running so we could subtract the barometric signal from the combined signal that the underwater units were reading. In the end I decided to re-submerge the older 2 bar unit I built back in March, despite the fact that it had already done a long stint underwater, and I would leave the newer 5-bar pressure unit on the surface after sealing the hole with some glue from the local hardware store.
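The barometric correction mentioned above is just a subtraction: the submerged unit reads the water column plus the atmosphere, the surface unit reads the atmosphere alone, and the difference divided by ρg gives the depth of water. A toy example with made-up numbers (not the processing we actually use):

```cpp
// Toy example of the barometric correction, with made-up readings (not logger code).
// The submerged unit sees water column + atmosphere; the surface unit sees atmosphere alone.
float waterDepthMeters(float submerged_mbar, float surface_mbar) {
  float waterColumn_mbar = submerged_mbar - surface_mbar;  // pressure from the water alone
  // 1 mbar = 100 Pa, and depth = P / (rho * g), with rho ~1025 kg/m3 for seawater
  return (waterColumn_mbar * 100.0) / (1025.0 * 9.81);
}

void setup() {
  Serial.begin(9600);
  Serial.println(waterDepthMeters(1325.0, 1013.0));  // prints roughly 3.10 (meters)
}
void loop() {}
```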

I was so zoned out getting all these little Frankensteins going that for a while I lost track of the days. I think it rained…or maybe it was sunny…because I was in Mexico…right? Fortunately, while I was going non-verbal, Ben Schwartz and his crew of avid cavers arrived in Akumal. Being somewhat occupied, I hardly noticed the time Trish spent talking to them about the region and its wonderful cave systems. They got the two-penny tour of our humble field station and endured my Cave Pearl "elevator speech", which was still embedded in my brain from the GSA. Good thing too, as scripting & sleep deprivation had crowded out most of my other brain functions by that point.

And at night our room lit up like a Christmas tree every fifteen minutes, because all the little LED heartbeats were blinking in rough unison as loggers ran their overnight tests…


Field Report: 2014-12-12 Retrieving the Coastal Discharge Flow Sensors

The ‘C’ generation units, ready to come home.

We planned on retrieving the deeper system units first, so after our customary visit to Bil’s dive center in Tulum, we headed out to one of the sites that our friends Jeff & Gosia had installed for us back in October. But a cracked sleeve on one of the high pressure hoses stopped the dive while we were still dry, and we spent a couple of hours hunting for a replacement in town. By the time we were ready to go again, a long dive was out of the question. So we chose instead to head over to our primary test site on the coast. It was a short, shallow dive, and I had a new suspension rig that I was keen to put on the ceiling of the cave to bring those flow sensors closer together. We only had one new sensor ready for the site, but we could always swing by later to put the other units in.

Uhhh, what is that brown stain on the temperature sensor?

The tide was with us, and we were at the site moments after leaving the surface. I did the now routine inspections, noting a bit more wobbling than I wanted to see on the suspension rods. I also spotted some discoloration on the white thermal-conductive epoxy I had used for the temperature sensors. I checked my watch, then the unit, watch, unit,…and saw no LED pips.  Now that was a real cause for concern, but there was nothing for it at this point.  So we collected the old flow sensors, removed the anchors, and I set about constructing a new connection rig from the various pieces of PVC I had in the mesh bag by my side.

It looks more exciting in the photos than it does in real life…

A little extreme underwater plumbing, plus an improvised extra support for the center of the rod (thanks to my old nemesis: vortex shedding), and we had it installed. We connected the one new sensor we had with us, and were somewhat surprised that it took almost 180 grams of ballast to make it neutral (?), until I remembered that I had lithium batteries in this unit. High power/mass ratios are not as advantageous as they might seem in underwater applications. After returning to the surface, I cradled the Pearls as we drove the tanks back to Tulum, watching for any signs of life, but it was starting to look like all of the units had expired. I was pretty unhappy about that, especially since C1 was a "Rosetta stone" build, with both a BMA180 and a BMA250 accelerometer inside. I planned to use that data to develop a transfer function that could merge the data from the different build generations. Now it depended on how long that logger had operated before the epoxy let go. If water had entered the housing, there might not be any data at all. I was also cursing myself for putting an untested adhesive on the pressure logger, as that was our only reliable tide record for the site.

Wanna see a maker cringe? Show them this.

Back at base, I had a chance to examine things more closely, and the news did not get any better. The new epoxy had degraded into a flakey, rubbery mess, and rust had devoured my temperature sensors. My only hope was that the plastic weld putty around the wires passing through the hull had provided some measure of protection in the shallow water.  Once we had photos of the damage, I started opening them up.

I was not expecting much, so I was pleasantly surprised to find that the loggers with the white epoxy had no water in the main housing. Both C1 and the pressure unit had small battery leaks, because the power module shorted out when salt water bridged the contacts, and alkalines usually pop if you drain them completely. The data files on the SD cards were intact, showing that C1 had two weeks' worth of data, while the pressure sensor ran for a month before it lost power. I copied the files over to Trish, and moved on to other forensics. As with the Beta units in Akumal Bay, the RTCs had lost between 30-40 seconds over the three month deployment.

The parallel deployment rig after installation.

Then I turned to C2 and C4, which had been spared the bad epoxy. I had hoped for a full data set from at least one of them, but the logs showed that they barely squeaked into October before pulling their batteries below the 2.8 V cutoff. That meant we now had a month long data gap for a system that we had been monitoring continuously since the first alpha units went in. The power curves from C2 & C4 were so spectacularly bad that I immediately restarted them on the meter, and discovered that both of their SD cards were terrible, with one of them drawing >7 mA while the logger slept. (That's probably some kind of record, and I am tempted to mail it to Bunnie, to see where it came from.) And just to pour salt on the wound, the 7-8 month lifespan projections from the previous generation had made me pretty cavalier about power consumption back in August. So I had left the C's running on a short 5 minute sampling interval, taking three times as many data points as we actually needed. Had I set them to a more pedestrian 15 minute sampling schedule, they might have pulled through. Arrrgh!

But in the end, we had something to work with, and that's all we really need from these early builds. While I was grumbling about crap SD cards, and adhesives made from leftover chicken parts, Trish had been click-clicking away happily on her data. She was in a much better mood than I was, so I asked her to cheer me up with a quick peek at some of the raw Z axis records out of C1. In theory, the 14-bit/1g bma180 (in blue) should outperform the humbler 12-bit/2g bma250 (red) which I had used on the earlier builds:

That 250 data is more stratified, but not nearly as much as I was expecting, and the difference in signal magnitude is almost negligible. Huh… perhaps that inter-generation data translation is not going to be as tough as I thought.
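Working from the bit depths and ranges quoted above, the nominal scale factors come out to roughly 8192 counts per g for the BMA180 (14 bits over ±1 g) and 1024 counts per g for the BMA250 (12 bits over ±2 g), so the first step in any transfer function is probably just converting both records into g. Something like this, treating those figures as nominal values rather than calibrated sensitivities:

```cpp
// Nominal count-to-g conversions, using the bit depths and ranges quoted above.
// These are nominal figures, not calibrated sensitivities for any particular board.
const float BMA180_COUNTS_PER_G = 16384.0 / 2.0;  // 14-bit reading spread over a 2 g span (+/-1 g)
const float BMA250_COUNTS_PER_G = 4096.0 / 4.0;   // 12-bit reading spread over a 4 g span (+/-2 g)

float bma180toG(int raw) { return raw / BMA180_COUNTS_PER_G; }
float bma250toG(int raw) { return raw / BMA250_COUNTS_PER_G; }

void setup() {
  Serial.begin(9600);
  Serial.println(bma180toG(4096));   // 0.5 g on the BMA180 scale
  Serial.println(bma250toG(512));    // 0.5 g on the BMA250 scale
}
void loop() {}
```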

By this point (2 am? ish?) my own batteries were running low, so we called it a day. Not a great day mind you, but sometimes that’s just how it goes.


Field Report: 2014-12-11 Our First Real-World Drip Sensor Data!

Our sites were all close to survey stations, so we can work out the locations for top side sensor co-deployments later.

Trish retrieved the first generation of drip sensors from Rio Secreto yesterday, but we did not have time to open them up until this morning. So breakfast consisted of me concatenating the separate csv files, while she brought some substantial spreadsheet whammy to bear analyzing the data.  We were both pretty excited to find that the first generation of drip sensors had actually worked, and I can only imagine how our little science/nerd fest looked to the other folks at the Turtle Bay Cafe, as they nursed their Mexican holiday hangovers with caffeine.

Trish described how one of our installation sites had experienced a flooding event, which knocked over some of the sensors. While she was re-assembling the scattered stations, she recorded another set of manual drip counts, so we could verify our numbers. Shortly after the deployment in August, I had made the unwelcome discovery that many of those early drip loggers had been deployed with fake SD cards, which use excessive amounts of power. But I had also installed lithium batteries to provide a bit more juice (~2900 mAh/cell), and I was happy to see that four of the original six units were still running. Whew!

I won't bore you with the low level details, because I think the graphs speak for themselves. This drip count is from DS01, binned to a 15 minute interval:

Fernanda Lasas,  who kept an eye on our little loggers during this deployment, confirmed that there was indeed an event, with rainfall and a jump in the local water table, right around that first point of inflection, and that it was “pretty rainy” for more than a month after that. This resolved that question I had about the validity of data from DS03, which had failed early.  While there was considerable variability in drip counts from one stalagmite to the next (as predicted by my wife back in August…), we saw congruent trends in the different records from this chamber. With so few stations in this first deployment,  I am not sure what else one could say at this point.

The second chamber we instrumented had considerably lower drip rates, which was somewhat ironic considering that it was the one that flooded during the deployment. This record is from DS07 (shown in the photo above), with the count recorded every 15 min:

While it doesn’t take a rocket scientist to see when the unit was knocked over, I have added a once-per-day x,y,z axis reading to the code, so we can be more certain that the sensors hold a fixed position on future deployments. You do see that spike reflected in the readings from the other chamber, although it was muted against the higher base rate. And it’s even good to see a long string of zeros in the data, as this tells me that my sensors are not generating false positives from internal noise. Accelerometers can be twitchy when you run them at high sensitivity.

The SD card debacle meant that most of these loggers burned through their batteries much faster than I would have liked. But after looking at the plots, I realized that we had a reasonably good natural experiment testing the capacity of the Energizer Advanced Lithium (EA91) batteries we used. So in the spirit of "making lemonade", I thought I would post the combined Battery Voltage (mV) vs Sleep Current (mA) results:

While that's not a Danish performance test, it shows me that the Cave Pearls really do need to get sleep currents below 0.3 mA if I'm going to get a year out of three AA's. You do not see that distinct "shoulder & elbow" pattern (see DS05) in the discharge curve of alkaline batteries, which give you a nice gradual decline in voltage. If a set of lithium cells is going to die while they are still at 4.4 volts, I need to raise the cut-off in the code to protect my data. Unit 3 did have a corrupt file on the SD card, so I probably lost a couple of weeks there by not intercepting the SD writes soon enough. I have reduced the number of records per file, which will put less information at risk from a lost file like this, though it means more work stitching the data together later.
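The cutoff itself is a one-line check at the top of each logging cycle; the important part is closing the current data file before the cells collapse. A sketch of the idea using that ~4400 mV knee, with readBatteryMilliVolts() standing in for whatever method a given build uses (voltage divider, or the AVR's internal bandgap reference):

```cpp
// Sketch of the low-voltage shutdown idea, not the deployment code.
// readBatteryMilliVolts() stands in for whatever a given build uses
// (a voltage divider, or the AVR's internal bandgap reference).
const int CUTOFF_MV = 4400;            // 3xAA lithium packs fall off a cliff below ~4.4 V

int readBatteryMilliVolts() {
  return analogRead(A0) * 10;          // hypothetical scaling, for illustration only
}

void shutdownLogger() {
  // close the open data file here so nothing is left half-written,
  // then stop logging for good
  while (true) { }                     // a real build would drop into power-down sleep
}

void setup() { Serial.begin(9600); }

void loop() {
  if (readBatteryMilliVolts() < CUTOFF_MV) shutdownLogger();
  // ...normal sampling, buffering and SD writes would happen here...
  delay(15UL * 60UL * 1000UL);         // one pass per 15 minute logging interval
}
```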

It took us most of the day to sift through it all, but in the evening I still had time to fire up a few new units, and review the overnights from the ones I had started the day before. In the process I discovered a problem with the new humidity sensors I had cobbled together just before the trip, which were claiming that our room hit 200% RH sometime around noon. The raw data looked fine, so I suspected a problem with the floating point calculations on the Arduino; something I ran into with the temperature sensors. I locked them into a bag with some desiccant packages to see if forcing them into a low range overnight would help me isolate the problem. They were still running the code that had performed beautifully back home, but during those tests they had not been exposed to such high humidity for significant amounts of time.
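For reference, the humidity math in question is the standard conversion from the HTU21D datasheet, RH(%) = -6 + 125 × (S_RH / 2^16), where S_RH is the raw 16-bit reading with the status bits masked off. That expression cannot exceed about 119% for any 16-bit input, so a 200% reading suggests the raw value was being mangled somewhere before or during the calculation. A minimal version of the conversion:

```cpp
// The standard HTU21D humidity conversion from the datasheet, for reference.
// rawRH is the raw 16-bit reading; the two lowest bits are status bits.
float htu21dToRH(uint16_t rawRH) {
  rawRH &= 0xFFFC;                        // clear the status bits
  return -6.0 + 125.0 * ((float)rawRH / 65536.0);
}

void setup() {
  Serial.begin(9600);
  Serial.println(htu21dToRH(0x7C80));     // example raw value -> roughly 55 %RH
}
void loop() {}
```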

Still, after a banner day like this, I can’t get too torn up about it.  🙂

Addendum 2014-12-16

I am happy to report that all of these first generation drip sensors went down to ~0.33 mA after I replaced the fake SD cards. And only one of those first gen units was built with a Rocket Ultra; the rest were a variety of cheap Pro-Mini style clones from eBay. I will leave the existing batteries in DS01, so that the next deployment gives me more of that discharge curve.

Addendum 2015-01-07

Another little gem lies in the record for DS04, which had a sleep current just under one milliamp. That unit looks like it would have gone another couple of weeks before hitting the rapid fall-off at 4.4v, giving about 4 months of operation. Now 4 (months) x 30 (days) x 24 (hours) = 2880, which, more or less, is the number of milliamp hours you get from a single AA battery. But the drip loggers have three batteries, so my 'live' duty cycle power drain is roughly 2x that being consumed when the unit is sleeping. DS04 wrote about 10000 records to the SD card, and with 64 records buffered to the 4k eeprom on the RTC, that translates to about 160 SD card write events. The cumulative drip count approached 11000. The drip-interrupt routine does not do very much, so I think writing data to the eeprom & SD card are my biggest power users. The new generation units have 32k eeproms, and I have them set to buffer a range from 96 to 512 records to see if I can quantify that power use a bit better.

Addendum 2015-03-02

A friend put me onto a way of graphing battery capacity that I had never seen before: Ragone plots. I am still wrapping my head around the idea, but the convergence of all those battery power curves seems to indicate that there is very little advantage to using lithium batteries in low power applications like data loggers unless you need to operate at very cold temperatures.

Addendum 2016-08-31

Just breezing over some of these older posts, I realized the mistake I made on 2015-01-07. When you put batteries in series you add voltage, but not milliamp-hour capacity, so 2880 mAh would have been the budget for the entire battery pack. Fortunately for me, the blog traffic was so low back then that no one pointed out the gaffe. I'm leaving it in, as it shows what kind of a learning curve I was on at that point, and the project continues to progress despite my many misunderstandings.


Field Report 2014-12-10: Collecting Salt Water Pearls

Without scuba gear, the fastest approach was simply to cut the anchors. I wanted to inspect those connectors anyway.

Our first day with boots on the ground, and we were quite keen to see what we had on the loggers that were deployed back in August. A fresh batch of data is always a great way to motivate yourself for fieldwork. So Trish headed out to Rio Secreto to collect the 1st generation drip sensors, while I met up with Gabriel and Marco from CEA to see about retrieving the Beta flow sensors we put in Akumal bay back in August. As we waited for a boat to become available, Gabriel showed me the fantastic records that they had kept, and the locations they had selected for the deployments of the other two flow meters. One had been placed in the shallower south side of the bay on October 13th, and the third unit was deployed at the mouth of Yalku Lagoon on November 27th.

 

This pivot joint failure was discovered on Nov 7th, and the unit was re-installed on Nov 21st. (Photo courtesy: Centro Ecologico Akumal)

Opportunistic photos of the units every couple of weeks revealed that the constant roil of the surf had taken a toll, with both of the loggers in the bay suffering failures on the anchor rods & pivot joints. I had designed the pearls for much gentler cave environments, so this was not unexpected. I was just thankful that the folks at CEA had been around to catch the problems while the "backup" bungee cords were still in place, or the loggers could have simply drifted away. Sometimes all you get from the first deployment is an understanding of how to do the next one better, and patchy data is still 100% better than no data at all. Of course, as I reviewed the photos with Gabriel, I could not help but wonder if the electronics had survived all that knocking about.

 

There were even a few small crabs crawling around on the surface – a marine bio project if I ever saw one.

Gabriel had other pressing business that day, so when the boat became available Marco and I set off to retrieve the flow meters. At each site we did a quick check that the north orientation was still correct, and that there were no obvious signs of physical damage. It was a gorgeous day to be out, but the bright tropical sun made it impossible for me to determine if the LEDs were still piping. The first unit in the bay looked great, but the second unit (in much shallower water) had suffered an incredible amount of bio-accumulation in only two months. I had never seen this on a sensor in the caves, and it made me wonder if it would even be possible to deploy ambient light sensors on a reef without some kind of rigorous cleaning schedule. By mid afternoon we had all the babies on board, and were heading back to shore.

The O-ring seats were still clear. I guess PVC tastes better to sea critters than EPDM?

I spent a couple of hours scraping the gunk off of the housings with isopropyl alcohol before I dared to break a seal. And it was tough going, even with a pot scrubber. During the cleaning I could see that the LED on unit three was lit, indicating that it had gone into some kind of error state. Unit 4 piped on schedule, but I saw no flash from Unit 5. After the cleaning, it still took a wrench and some colorful language to loosen those bolts.

Once they were open I had a chance to look at the data files. All of them had saved at least 10,000 records, but unfortunately the data from Unit 3 consisted of the same four numbers, repeating over and over again. Inspection revealed that the SCL line on the I2C bus was broken. This had terminated the internal communications, although the RTC interrupt continued to fire on schedule for at least a month before it got confused and reset itself. So the logger from the south of the bay did not give us anything useful. Unit 4, the first to go in, was still running when I disconnected it, and I was keen to see how much power it had used in three months (see: mV vs time in the graph below). These beta generation units were running some pretty hairy old code, and I knew they were probably pulling a few mA the whole time. I also had Unit 4 on a five minute sample schedule, so it had saved almost twenty-eight thousand records to the SD card:

Battery voltage (mV) vs time for unit B4.

No surprises there, with another 2-3 months of operation before this unit powered down. But it is worth noting how much spread there is in the voltage reading. This generation of loggers sported a TinyDuino stack so I used the AVR’s internal 1.1 vref to monitor the battery, and I was not expecting to see so much variability with the bandgap voltage method (>70 mV of noise?). When I use a voltage divider to read Vbat on my other builds, the readings are much more stable. 
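For anyone who hasn't seen the trick: the ADC is pointed at the internal 1.1 V bandgap while Vcc is used as the reference, and the battery voltage is back-calculated from that reading. A common version for the ATmega328 looks something like this; note that the 1.1 V reference has a few percent part-to-part tolerance, so the absolute numbers need a per-board correction anyway:

```cpp
// Common ATmega328 trick: measure the internal 1.1 V bandgap against Vcc,
// then back-calculate Vcc in millivolts. No extra parts required.
long readVcc() {
  ADMUX = _BV(REFS0) | _BV(MUX3) | _BV(MUX2) | _BV(MUX1);  // Vcc reference, 1.1 V bandgap input
  delay(2);                              // let the reference settle
  ADCSRA |= _BV(ADSC);                   // start a conversion
  while (bit_is_set(ADCSRA, ADSC));      // wait for it to finish
  long result = ADC;                     // 10-bit reading of 1.1 V relative to Vcc
  return 1125300L / result;              // Vcc(mV) = 1.1 * 1023 * 1000 / reading
}

void setup() { Serial.begin(9600); }
void loop()  { Serial.println(readVcc()); delay(1000); }
```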

It will take me a while to chew the compass and accelerometer data into something useful but the temperature record really jumped out at me:

Temperature record from unit B4 in Akumal Bay. (Repairing the anchor rod failure left a two week data gap in Nov.)

For almost two months the night-time lows stay above 28 C, with some of the highs reaching 31 degrees. And this sensor (DS18B20) is not on the surface, but down in the middle of the water column at about 3m depth, pretty close to that reef. I’m no biologist, but it seems to be getting a little toasty down there…

We had a little farmer tending the crop of algae that bloomed on Unit 3.

Unit 5 was still running, although the LED ground line had been shaken loose, which is why I did not catch any pips. This build also had a 3.3v regulator on the power module, so I don't have battery voltage data to analyse. And finally, this unit did not go into the water till Nov 27th, so its flow data record is quite brief. However there was one other thing I could look at before calling it a day: how much did those cheap eBay RTC's I was using drift over the deployment? I found a lag of about 30 seconds in the RTC on Unit 4, and about 40 seconds had been dropped from Unit 5. I probably caused some of that delay myself, as I was not setting the clocks very carefully back then, but it still gives me some indication that these RTC boards should be good for a year long deployment. Not bad for a board that only cost two bucks.

The Loctite epoxy is starting to yellow.

Beta Unit 4 has now been under water for almost 10 months. The JB marine weld & Loctite epoxy are starting to show their age; in fact, if the units had not been under water the whole time, I'd say they were suffering from UV exposure. But I think they should still be water tight for a while, despite the fact that I exceeded any manufacturer specifications quite a while ago. The plan is to keep these early builds in service till the housings finally fail, but I would like to lower that sleep current before these units are redeployed. If memory serves, I never did get around to sleeping that bma250 in the Beta generation code (?)


Project Update: Gearing up for field work 2014-11-26

Using Loctite E00CL this time round. The epoxy is weaker overall, but claims higher shear strength on PVC than E30-CL. The faster epoxy gets hotter, and contracts more, so there may be some risk of lifting the components. And the numbers in the individual data sheets do not exactly match those in Loctite’s own Plastics Bonding Guide, so who knows?

Another field trip is rapidly approaching, and I am scrambling to finish the bench tests before we have to stuff everything into a suitcase. The last three months have seen the project migrate away from the unregulated TinyDuinos as the heart of the data logging platform, to RocketScream based builds. Most of my sensors require a regulated 3.3 volt supply, and with only one MCP1700 voltage regulator in the mix, the Ultras have been delivering better sleep currents overall. The MCP also gives me the ability to use lithium batteries in a pinch (whose over-voltage would fry the unprotected processor on a TinyD board), and it delivers up to 250 mA if I end up with really power hungry sensors later on. Now that I have the same core logging platform in all the different Cave Pearl models, it is easier to shave down my code, as the compiles keep bumping up against the 32k limit for multi-sensor configurations like the pressure/temp/RH unit.

But I have not forgotten how the TinyDuinos catapulted this project into viability back in 2013, and I am waiting to see if they release a generic I2C driver shield. Despite my rough handling of those early Tiny-based builds, most of them are still chugging along after months under water, a tribute to the quality of their build. I enjoy soldering my little bots together, but anything you have to do a hundred times begins to lose its luster.

With new 32k EEproms in the mix, space on that logger platform is getting pretty tight and I have to trim the groove hub pcb to make more room.

Bench testing over the last few months has seen more sensor failures than you can shake a stick at, and I am sure that there are more to come if I keep using cheap eBay vendors. The best overall diagnostic for identifying good breakout boards continues to be shutdown mode current: if it's on spec, and the board delivers a stable reading after wake-up, you're golden. Along the way, there have been so many little code tweaks I could not even begin to list them all. Some, like having the sensor reading LED pip change color to also indicate battery status, were effortless. But others, like determining the optimum number of times to use precious power-up cycles to check that battery status, still have me scratching my head. We have more than 12 new loggers to deploy this time round, and I will be embedding plenty of little mini-experiments in the code to give me some empirical data for those questions.

You need at least a week of dry tests, as some sensors don’t fail till they have been running for a few days.

At this point I am focusing on microamps, not milliamps, and the best drip sensor builds are coming in with sleep currents below 15 μA (if I get all the sensors into their low power modes and pin-power the RTC). That's a heck of a lot better than I was expecting for a few jumpers connecting off-the-shelf breakout boards. Even with the physical build coming together well, I still have a huge sensor calibration to-do list hanging over my head. But the tickets are already bought, so that will have to wait till after the next set of field deployments. I also need to develop a new bench testing method that gives me the ability to discriminate how relatively subtle code changes affect a micro-power budget. Oscilloscopes seem to capture a time window that is too brief for the complex duty cycle of a data logger, and the power use ranges from a few μA of sleep current to many tens of mA for SD card writing during each cycle.
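For context, the core of that micro-power duty cycle is simply: put every sensor into its lowest power mode, drop the processor into power-down sleep, and let the RTC's alarm line wake it through a hardware interrupt. A stripped-down sketch of that skeleton (using the Rocket Scream LowPower library, and assuming the DS3231's INT/SQW pin is wired to D2):

```cpp
// Stripped-down wake/sleep skeleton (a sketch of the approach, not the full logger code).
// Assumes the DS3231's INT/SQW alarm output is wired to digital pin 2 (INT0),
// and uses the Rocket Scream LowPower library for the power-down sleep.
#include <LowPower.h>

const uint8_t RTC_ALARM_PIN = 2;
volatile bool rtcFired = false;

void rtcISR() { rtcFired = true; }       // keep the ISR tiny: just set a flag

void setup() {
  pinMode(RTC_ALARM_PIN, INPUT_PULLUP);  // DS3231 alarm line is open drain, active low
  attachInterrupt(digitalPinToInterrupt(RTC_ALARM_PIN), rtcISR, FALLING);
  // ...set the first RTC alarm, put sensors into their low power modes, etc...
}

void loop() {
  // everything off between samples: this is where the sub-15 uA sleep figures come from
  LowPower.powerDown(SLEEP_FOREVER, ADC_OFF, BOD_OFF);

  if (rtcFired) {
    rtcFired = false;
    // ...clear the RTC alarm flag, read sensors, buffer the record, set the next alarm...
  }
}
```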

Hmmm…
