Thermal imaging cameras help researchers
understand hummingbird energy usage
At the Loyola Marymount University Center for Urban Resilience (Los Angeles, CA, USA; cures.lmu.edu), researchers are using thermal imaging cameras to better understand how hummingbirds
can expend so much energy with so little rest.
By understanding the physiological
mechanisms hummingbirds use to cope
with extreme energy requirements and
limitations, researchers may be able to gain
insight into broader human medical applications, such as the need to reduce
oxygen and food consumption during
long-term space travel, according to FLIR
(Wilsonville, OR, USA; www.flir.com).
Hummingbirds need to maintain a
high metabolism since they use energy at
such extreme rates. Because of their small
size, the birds consume the caloric equivalent of 300 hamburgers in nectar each day. Unless
a female hummingbird is nesting, a nightly bout of temporary hibernation known as torpor is vital to survival.
Torpor involves a drastic reduction of body temperature, and nesting hummingbirds are
unable to enter this state, as they must use their body heat to incubate their eggs.
With body temperature being the primary indicator of torpidity, the researchers decided
to use thermal imaging cameras to monitor the nests without disturbing them. The research
team used a Vue Pro R thermal drone camera, which enabled the researchers to place the
cameras very close to the birds in order to capture frequent, accurate, and non-contact temperature readings. Vue Pro R cameras feature uncooled VOx microbolometer thermal imagers in either 640 x 512 or 336 x 256 formats, and a 7.5–13.5 µm spectral range.
In addition to the Vue Pro R, the team placed a FLIR C2 handheld camera near the
nest for additional monitoring. The C2 camera features an 80 x 60 uncooled microbolometer thermal imager, as well as a 640 x 480 visible color camera, along with FLIR’s
MSX (multispectral dynamic imaging) image processing enhancement, which enhances
thermal images with visible light detail for extra perspective. The C2 features a spectral
range of 7.5–14 µm.
By using these cameras, LMU researchers are now monitoring 26 nests daily, measuring
the energy expenditure of female hummingbirds by thermally tracking each nesting bird's state of torpidity.
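As a rough illustration of the monitoring approach, torpor could be flagged from the cameras' non-contact temperature readings with a simple rule. The function name and the temperature thresholds below are illustrative assumptions, not values reported by the research team:

```python
def is_torpid(body_temp_c, active_temp_c=40.0, drop_threshold_c=10.0):
    """Hypothetical torpor check based on a thermal camera reading.

    A hummingbird is flagged as torpid when its measured surface
    temperature has fallen well below its normal active body
    temperature. Both threshold values here are illustrative, not
    figures from the LMU study.
    """
    return body_temp_c <= active_temp_c - drop_threshold_c
```

In practice the researchers collect repeated readings through the night, so a rule like this would be applied to a time series per nest rather than to a single frame.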
Facebook develops new technique for
accelerating deep learning for computer vision
Based on a collaboration between the Facebook (Menlo Park, CA, USA; www.facebook.com) Artificial Intelligence Research
and Applied Machine Learning groups, a
new paper has been released that details how
Facebook researchers developed a new way
to train computer vision models that significantly speeds up the training of image classification models.
Facebook explains in the paper that deep
learning techniques thrive with large neural
networks and datasets, but these tend to involve longer training times that may impede
research and development progress. Using
distributed synchronous stochastic gradient
descent (SGD) algorithms offered a potential solution to this problem by dividing SGD
minibatches over a pool of parallel workers,
but to make this method efficient, the per-worker workload must be large, which implies nontrivial growth in the SGD mini-batch size, according to Facebook.
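The minibatch-splitting idea described above can be sketched in a few lines: each worker computes a gradient on its local shard of the minibatch, and synchronous SGD averages those gradients before applying a single shared update. This is a minimal single-process simulation (using a scalar least-squares model for clarity), not Facebook's actual multi-GPU implementation:

```python
import numpy as np

def worker_gradient(w, x_shard, y_shard):
    # Gradient of the mean squared error (x * w - y)^2 with respect to w,
    # computed only on this worker's local shard of the minibatch.
    return 2.0 * np.mean((x_shard * w - y_shard) * x_shard)

def sync_sgd_step(w, shards, lr):
    # Synchronous SGD: every worker computes its local gradient, the
    # gradients are averaged (an all-reduce in a real system), and all
    # workers apply the same update.
    grads = [worker_gradient(w, x, y) for x, y in shards]
    return w - lr * np.mean(grads)
```

With equal-sized shards, the averaged gradient is identical to the gradient over the full minibatch, which is why the synchronous scheme preserves the mathematics of ordinary SGD while spreading the work across parallel workers.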
In the paper, the researchers explained
that on the ImageNet dataset, large minibatches cause optimization difficulties, but
when these are addressed, the trained networks exhibited good generalization. The
time to train on the ImageNet-1k dataset of
over 1.2 million images would previously
take multiple days, but Facebook has found
a way to reduce this time to one hour, while
maintaining classification accuracy.
“They can say ‘OK, let’s start my day, start
one of my training runs, have a cup of coffee,
figure out how it did,’” Pieter Noordhuis, a
software engineer on Facebook’s Applied
Machine Learning team, told VentureBeat
(San Francisco, CA, USA; www.venturebeat.com). “And using the performance that [they]
get out of that, form a new hypothesis, run
a new experiment, and do that until the day
ends. And using that, [they] can probably do
six sequenced experiments in a day, whereas
otherwise that would set them back a week.”
Specifically, the researchers noted that
with a large minibatch size of 8192, using 256
GPUs, they trained ResNet-50 in one hour
while maintaining the same level of accuracy
as a 256 minibatch baseline. This was accomplished by using a linear scaling rule to adjust learning rates as a function of minibatch size, and by developing a new warmup scheme that overcomes optimization challenges early in training by gradually ramping up the learning rate from a small value to a large one.