@sdallen The raw output is a single spectrum, spatially integrated over the image, plus the image itself. Ideally, this would be analyzed locally on the phone to give a single number or answer for any particular application (material, deltaE color difference, etc.).
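To make that concrete, here is a rough sketch (Python with the third-party colour-science package; the two spectra and the D65/CIEDE2000 choices are placeholders, not our production pipeline) of how a spatially integrated reflectance spectrum could be reduced to a single deltaE number:

```python
# Rough sketch only: reduce a spatially integrated reflectance spectrum to one
# deltaE number. Uses the third-party "colour-science" package; the two spectra
# below are synthetic placeholders, not real device output.
import numpy as np
import colour

wavelengths = np.arange(380, 790, 10)  # 380-780 nm in 10 nm steps
sample_refl = dict(zip(wavelengths, 0.5 + 0.3 * np.sin(wavelengths / 60.0)))
target_refl = dict(zip(wavelengths, 0.5 + 0.3 * np.sin(wavelengths / 62.0)))

def spectrum_to_lab(reflectance):
    """Integrate a reflectance spectrum against the CIE observer under D65."""
    sd = colour.SpectralDistribution(reflectance)
    XYZ = colour.sd_to_XYZ(sd, illuminant=colour.SDS_ILLUMINANTS['D65'])
    return colour.XYZ_to_Lab(XYZ / 100)  # sd_to_XYZ scales Y to ~100

# The "single number/answer": CIEDE2000 difference between sample and reference.
dE = colour.delta_E(spectrum_to_lab(sample_refl),
                    spectrum_to_lab(target_refl),
                    method='CIE 2000')
print(f"deltaE_00 = {dE:.2f}")
```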
We are currently working on a more advanced model where you can manually or automatically drag-select multiple areas of interest in the image, fully bridging the gap between point-style and hyperspectral measurement. You then get data only from the regions you actually want to see, with the averaging physically integrated into the measurement.
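As a sketch of the kind of output we have in mind there (hypothetical names, not our actual API), each drag-selected region would come back as its own physically integrated spectrum alongside the preview image:

```python
# Hypothetical data layout for the ROI mode: the preview image plus one
# physically integrated spectrum per drag-selected region. Names are
# illustrative only, not the actual API.
from dataclasses import dataclass
import numpy as np

@dataclass
class RoiResult:
    mask: np.ndarray      # boolean mask over the preview image, True inside the selection
    spectrum: np.ndarray  # spectrum integrated (optically) over that region

@dataclass
class Capture:
    image: np.ndarray       # RGB preview image, shape (H, W, 3)
    rois: list[RoiResult]   # one entry per selected area of interest

def combined_spectrum(rois):
    """Area-weighted average across ROIs, if a single overall answer is wanted."""
    areas = np.array([r.mask.sum() for r in rois], dtype=float)
    spectra = np.stack([r.spectrum for r in rois])
    return (areas[:, None] * spectra).sum(axis=0) / areas.sum()
```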
What do you think about that? Is that what you were expecting?
What does the output look like?
Retired prof, ME.
Great to see you all coming in! Please feel free to comment below!