Great to see you all coming in! Please feel free to comment below!
Retired prof, ME.
Thank you for sharing @sdallen :)
What does output look like?
@sdallen The raw output is the image itself plus a single spectrum, spatially integrated over that image. Ideally, this would be analyzed locally on the phone to give a single number/answer for any particular application (material identification, deltaE color difference, etc.).
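To make the idea concrete, here is a minimal sketch of that two-step reduction, assuming the capture arrives as a NumPy cube of shape (height, width, bands); the shapes and the function names (integrate_spectrum, delta_e_76) are illustrative, not our actual API:

    import numpy as np

    def integrate_spectrum(cube: np.ndarray) -> np.ndarray:
        """Average a (height, width, bands) cube over both spatial axes,
        yielding the single spatially integrated spectrum, shape (bands,)."""
        return cube.mean(axis=(0, 1))

    def delta_e_76(lab_a, lab_b) -> float:
        """CIE76 color difference: Euclidean distance in L*a*b* space,
        i.e. the kind of single-number answer mentioned above."""
        return float(np.linalg.norm(np.asarray(lab_a) - np.asarray(lab_b)))

    cube = np.random.rand(48, 64, 128)    # stand-in for a captured cube
    spectrum = integrate_spectrum(cube)   # the "single spectrum" output
    print(spectrum.shape)                 # (128,)
    print(delta_e_76([52.0, 42.5, 20.1], [52.3, 41.9, 19.8]))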
We are currently working on a more advanced model where you can manually or automatically drag-select multiple regions of interest in the image, fully bridging the gap between point-style and hyperspectral instruments. You would then get data only from the regions you want to see, with the averaging physically integrated over each selection.
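A rough sketch of how drag-selected regions might map onto the cube; the mask-based averaging here is my assumption about the mechanics, not a description of the product's internals:

    import numpy as np

    def roi_spectra(cube: np.ndarray, masks: list) -> list:
        """For each boolean (height, width) mask, average the cube's spectra
        over just the selected pixels, so each ROI yields one spectrum."""
        return [cube[mask].mean(axis=0) for mask in masks]

    cube = np.random.rand(48, 64, 128)
    roi = np.zeros((48, 64), dtype=bool)
    roi[10:20, 30:45] = True              # one drag-selected rectangle
    spectra = roi_spectra(cube, [roi])
    print(spectra[0].shape)               # (128,) -- one spectrum per ROI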
What do you think about that? Is that what you were expecting?
Here is an actual test image from our team showing RGB integration, to illustrate the above @sdallen.
(Please disregard the intentional defocus; it is there to avoid aliasing from the monitor.)
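For anyone curious what "RGB integration" means computationally, here is one plausible sketch: weighting each spectral band by per-channel response curves and summing. The random response matrix is a placeholder; real CIE or sensor curves would be used:

    import numpy as np

    def cube_to_rgb(cube: np.ndarray, responses: np.ndarray) -> np.ndarray:
        """Contract a (H, W, bands) cube against a (3, bands) response
        matrix, collapsing it to an (H, W, 3) RGB preview image."""
        rgb = np.tensordot(cube, responses, axes=([2], [1]))  # (H, W, 3)
        return rgb / rgb.max()                                # normalize for display

    bands = 128
    responses = np.random.rand(3, bands)       # placeholder response curves
    cube = np.random.rand(48, 64, bands)
    print(cube_to_rgb(cube, responses).shape)  # (48, 64, 3)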