Published Papers
To be extended...
Current Proposals

Plant 360: Capturing a dataset of plant images for advanced 3D phenotyping

Currently, I have not published any papers, mainly because this project is still in its infancy. However, I can describe the progress being made towards our first publication.
At the time of writing, other members of the Computer Vision Lab and I are capturing 360-degree views of a collection of different plants. The purpose of this dataset is to train machine learning models: view synthesis is the most likely application, but the images could also support other plant phenotyping tasks. As far as we are aware, no public dataset of 360-degree views of plants currently exists.
[Image: UR5 and gantry setup]
Visual depiction of the gantry and UR5 robotic arm setup. The arm always points at the origin of the plant and has complete freedom of movement within a 1m x 1m volume.

The camera used to capture the images will record not only visible-light RGB values but also values from the infrared band. Infrared radiation is a useful indicator of several plant properties, such as leaf health and osmosis rates, so when training an agent to recognise optimal views of a plant, having infrared information could be critical for effective analysis. Furthermore, there has so far been little research into extending view synthesis models to predict information outside the visible-light band.
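As an illustration of how the infrared channel might be consumed downstream, the sketch below (not the lab's actual pipeline) stacks an RGB capture and a single-channel infrared capture into one four-channel array so that models can treat infrared as an extra colour channel. The file names and the 8-bit RGB encoding are assumptions for illustration only.

```python
# Minimal sketch: combine an RGB image and a single-channel infrared image
# into one (H, W, 4) array. File layout and bit depths are assumptions.
import numpy as np
import imageio.v3 as iio

def load_rgb_ir(rgb_path: str, ir_path: str) -> np.ndarray:
    rgb = iio.imread(rgb_path).astype(np.float32) / 255.0   # assumes 8-bit RGB, (H, W, 3)
    ir = iio.imread(ir_path).astype(np.float32)              # single-channel infrared
    ir /= ir.max() if ir.max() > 0 else 1.0                  # normalise to [0, 1]
    return np.concatenate([rgb, ir[..., None]], axis=-1)     # (H, W, 4), same resolution assumed

# Hypothetical usage:
# sample = load_rgb_ir("plant_001/view_042_rgb.png", "plant_001/view_042_ir.png")
```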
A UR5 robotic arm, with the camera attached, will be programmed to navigate around the plant. The arm will be suspended from a gantry that can translate it anywhere within a 1m x 1m area above the plant; this avoids situations where the arm cannot reach a valid view because its joints would self-intersect. At each of a series of calculated positions, the arm will capture an image of the plant from that perspective.
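To make the capture procedure concrete, here is a minimal sketch of how the viewpoints could be planned: sample roughly evenly spaced positions on a hemisphere around the plant and build a look-at rotation so the camera always faces the origin. The radius, view count and Fibonacci-spiral sampling are illustrative assumptions, not the actual trajectory the UR5 will follow.

```python
# Sketch of viewpoint planning: positions on a hemisphere around the plant,
# each paired with a rotation that points the camera at the origin.
import numpy as np

def hemisphere_positions(n_views=100, radius=0.5):
    """Spread n_views points over the upper hemisphere using a Fibonacci spiral."""
    i = np.arange(n_views)
    z = (i + 0.5) / n_views               # heights in (0, 1): upper hemisphere only
    phi = i * np.pi * (3.0 - np.sqrt(5))  # golden-angle increment around the vertical axis
    r = np.sqrt(1.0 - z**2)
    return radius * np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def look_at(position, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """Camera-to-world rotation whose forward axis points from `position` to `target`."""
    forward = target - position
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    return np.stack([right, true_up, forward], axis=1)  # columns: right, up, forward

poses = [(p, look_at(p)) for p in hemisphere_positions()]
print(f"Planned {len(poses)} camera poses around the plant origin.")
```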
Condition               Total
Species                 5
Plants per species      10
Image views per plant   100
Total images            5000
Specifications for the number of images that will be captured.
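As a quick sanity check, the counts in the table multiply out to the stated total; the variable names below are purely illustrative.

```python
# 5 species x 10 plants per species x 100 views per plant = 5000 images in total
species, plants_per_species, views_per_plant = 5, 10, 100
total_images = species * plants_per_species * views_per_plant
assert total_images == 5000
print(f"{total_images} images in total")
```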

Overall, this paper aims to cover the following: how the UR5 captures the image views, the benefits of using this dataset for training and testing, a demonstration of the dataset on a standard NeRF model, and how the UR5 gantry setup can be used to capture datasets of other interesting objects.

Finally, this paper is a collaborative effort with the following colleagues:
  • Dr Mike Pound: main advisor of my project and co-writer of the final paper.
  • Dr Darren Wells & Dr Jonathan Atkinson: bio-scientists who are assisting with collecting the plants and building the gantry.
  • Mr Simon Castle-Green: robotics expert and consultant for controlling the UR5 arm.

We currently aim to submit this paper for peer review by the end of May.