Scanning: (R)ock (P)aper (S)cissors
In the last dev blog update (from November, ugh!) we showed some RasPiCam calibration work, aimed at getting better image quality during scans.
You also got a very early glimpse of the FABtotum UI, to give you an idea of how hardware and software coexist.
Today I want to talk a little more about how we are implementing the 3D-scanning functionality of the FABtotum Personal Fabricator, and in particular how the different types of physical-to-digital acquisition work.
Rock Paper Scissors
As with any tool, there are situations where one tool is better suited than another, even a slightly different one, but no single tool is always best. That’s the case with the 3 main methods of digital acquisition the FABtotum Personal Fabricator is capable of.
This difference is clearly visible in the software environment we are developing, where each of the 3 functions has its own set of instructions and scripts.
R: Rotating platform laser-scanning:
This is the most common kind of laser scanner you can see around. It works by recording a laser line projected onto an object fixed on a rotating platform. The whole 360° rotation is usually divided into small increments, and each increment is recorded by a camera (the RasPiCam in our case).
With some math wizardry each laser profile can be used to measure the position of each point of the scan in a Cartesian plane.
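To give you an idea (the real r_scan.py math is a bit more involved, and the names and fixed-angle geometry below are just ours for illustration), a simplified version of that triangulation looks roughly like this in Python:

import math

# Simplified sketch of laser triangulation (illustration only, not the actual r_scan.py math).
# With a known angle between the camera axis and the laser plane, the sideways pixel
# displacement of the laser line encodes how far the surface is from the rotation axis.
def pixel_to_point(row, col, laser_col, mm_per_pixel, laser_angle_deg, platform_angle_deg):
    displacement_mm = (col - laser_col) * mm_per_pixel                   # shift of the laser line
    radius = displacement_mm / math.tan(math.radians(laser_angle_deg))   # distance from the rotation axis
    z = row * mm_per_pixel                                               # height, straight from the image row
    theta = math.radians(platform_angle_deg)                             # current platform rotation
    return (radius * math.cos(theta), radius * math.sin(theta), z)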
The Python script responsible for this job is the r_scan.py script, and it gets controlled by the UI via PHP (and some Ajax) for basic setup purposes.
This means that the user can set some advanced params (or not, if they don’t want to) to improve scan quality or reduce scanning times.
Without getting too technical, the Python script is called by the web UI via PHP:
python r_scan.py -s <num_of_slices> -i <ISO> -r<resolution> -p<postprocessing>
Here the -s param controls how many slices the 360° scan should be divided into, the -i param sets the ISO of the camera, -r the resolution, and -p enables on-board postprocessing to improve precision (this last one is an internal flag).
What the script does is, basically, the following:
- Take a picture
- Rotate the object by 360/<num of slices> degrees
- Apply postprocessing, including some nice tricks to cut processing times, like using grayscale images and YUV brightness, and working on image subdomains instead of the whole image <resolution>
- Do all the math and get a finite number of points with X,Y,Z coordinates
- Save the points in a *.asc file in point cloud format
- Repeat the above for each of the <num of slices> slices
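Put together, the main loop is something like this (a rough sketch with placeholder function names and example values, not the actual r_scan.py code):

num_of_slices, iso, resolution = 400, 400, (1920, 1080)   # example values
slice_angle = 360.0 / num_of_slices
points = []

for i in range(num_of_slices):
    image = capture_picture(iso, resolution)            # placeholder: RasPiCam shot
    rotate_platform(slice_angle)                        # placeholder: 360/<num of slices> degrees
    profile = extract_laser_profile(image)              # placeholder: grayscale/YUV tricks live here
    points += profile_to_xyz(profile, i * slice_angle)  # the triangulation math from above

save_asc(points, "scan.asc")                            # placeholder: dump the point cloud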
Take a look at this picture before you get too bored!
The resulting file is what is called the “point cloud data”, which looks something like this:
3019.05497679,-1270.11929871,15900
2997.5912647,-1261.08949463,15910
2998.10752996,-1261.30668792,15920
3006.95848709,-1265.030294,15930
3001.80040247,-1262.86028289,15940
…
Which is, translated, a list of X,Y,Z coordinates in floating point format, separated by \n (newline). The point cloud is composed of thousands of those.
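If you want to play with the raw data yourself, the file can be read back with a few lines of Python (a trivial sketch, not part of the FABtotum scripts):

def load_asc(path):
    # Read an .asc point cloud back into a list of (x, y, z) tuples.
    points = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                x, y, z = (float(v) for v in line.split(","))
                points.append((x, y, z))
    return points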
This is not the only way of storing the points, but we aren’t really using the cloud data itself here; we are saving it for future use. At this point we can do two things: the first is to download the *.asc file from the web UI (onto your PC) and use the cloud data in a software like MeshLab to create a mesh (a process called triangulation); the second, still in development, is to take the cloud data and triangulate it directly on the Raspberry Pi using the Qhull library or similar.
This last option is the trickier one and is part of the “to come” features of the FABtotum.
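Just to illustrate that second route (this is not the pipeline we are shipping): SciPy exposes Qhull through scipy.spatial, and a convex hull, while far too crude for real surface reconstruction, shows the kind of call involved:

import numpy as np
from scipy.spatial import ConvexHull   # SciPy wraps the Qhull library

# Crude illustration only: a convex hull ignores every concave feature of the scan;
# a real reconstruction needs a smarter algorithm.
points = np.array(load_asc("scan.asc"))  # load_asc() from the sketch above
hull = ConvexHull(points)
print(len(hull.simplices), "triangles")  # hull.simplices holds the triangle vertex indices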
Both processes are necessary steps if we want to use the 3D geometry for other purposes later:
- 3D printing the geometry (slicing)
- Using the geometry for milling
- Using the geometry for other purposes
But that’s a topic for another discussion.
The limit of rotating-platform laser scanners, however, is that they require an angle between the camera and the laser line generator.
This can create “shadows”: areas where the laser line is not seen by the camera because a feature of the scanned object blocks the camera’s line of sight.
Another major limit is that, with our hardware, it’s not really the best choice for scanning very small objects, due to the width of the laser line. And here is where the probe comes in (don’t be scared!).
P: Probing
This method of scanning works best for (locally) flat surfaces with small surface features: for example a coin, or a carved surface.
Probing works in an entirely different way than the method above.
While it shares some common interface features (it’s piloted by the FABtotum Web UI in the same way), the so-called p_scan.py script uses a different logic altogether.
- The user selects what X/Y area to probe and the max_z (feature height) to probe
- The user chooses whether the object has to be probed on 3 or 4 axes
- The script takes control and probes the selected area on 3 or 4 axes
- When it encounters features, it corrects its own max_z to avoid crashing into the object (for slopes etc.)
The probe itself is currently composed of an FSR (Force Sensitive Resistor) sensor connected to the probe’s arm. Any contact with the arm is registered by the sensor.
The probe is servo-assisted and is engaged (lowered from the head) only when needed. This setup may change in the future, but the principle remains the same.
In this case the Raspberry Pi takes full advantage of its role as the master board, with the ability to lay down the probing strategy during the probing itself.
The result of the probe touching the object (the probe is, basically, a switch) is an event (let’s call it “contact”). When the “contact” event is registered, the UI issues an M114 command to the FABtotum’s Arduino derivative, receiving the current position formatted like so (RepRap Marlin FW):
ok C: X:0.00 Y:0.00 Z:0.00 E:0.00
Of course, in the case of 4-axis probing the E axis is our A axis.
OT: We have discussed implementing a “clone” of the E axis called “A”, with a separate step/dir config directly in the firmware, but for now we are doing fine as it is.
The position is then stored in the point cloud array.
The probe is retracted and the next point is probed until there are no points left.
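To make the flow concrete, here is a stripped-down sketch of that loop (the serial port, the helper names and the list of points are our own assumptions, not the actual p_scan.py code):

import re
import serial   # pySerial; we assume the Arduino-derived board shows up as a serial device

POSITION_RE = re.compile(r"X:(?P<x>[-\d.]+) Y:(?P<y>[-\d.]+) Z:(?P<z>[-\d.]+) E:(?P<e>[-\d.]+)")

def probe_area(port, xy_points, max_z):
    # Probe a list of (x, y) positions and return (x, y, z, a) tuples.
    board = serial.Serial(port, 115200, timeout=5)
    cloud = []
    for x, y in xy_points:
        move_above(board, x, y, max_z)       # placeholder: position the head above the point
        lower_until_contact(board)           # placeholder: wait for the FSR "contact" event
        board.write(b"M114\n")               # ask the Marlin-derived firmware for the position
        reply = board.readline().decode()    # e.g. "ok C: X:0.00 Y:0.00 Z:0.00 E:0.00"
        m = POSITION_RE.search(reply)
        if m:
            cloud.append((float(m["x"]), float(m["y"]), float(m["z"]), float(m["e"])))  # E is our A axis
        retract_probe(board)                 # placeholder: lift the probe before the next move
    return cloud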
At the moment we haven’t added parametric point density to increase precision on some parts of the object, but that can be added later on.
At the end of the scan the whole 4-D array containing the points is dumped in a *.asc file, just like before.
From this point on there is no difference in how the cloud data should (and must) be processed.
Overall, the probe is limited in the sense that it’s a physical object, and therefore it cannot go in places where it’s too big to fit or where there are obstacles (this is particularly true in 4-axis probing). But the biggest problem here is the speed of the probing, which is usually an order of magnitude slower than the laser scanner (a single camera shot in a rotating-platform laser scanner can identify up to 1920 3D points in 0.7 seconds).
On the other hand the probe can be SO. MUCH. more precise, and can handle objects made of materials that don’t behave well with the laser, like glass, metal, or your miniature disco ball!
S: Sweeping laser scanning
The sweeping laser scanner is the last method implemented with the FABtotum’s hardware. It takes full advantage of the FABtotum’s 4-axis movements.
At the moment the script is not quite finished yet, also because with a couple of tweaks we could add a command-line parameter to the r_scan.py script and make it work for sweeping laser scans on the same principle (yes, math rocks!).
The objective of the sweeping laser scanner on 4 axes is to completely avoid holes and shadows.
It accomplishes this by making multiple sweeps over portions of the object while rotating it on the 4th axis.
It isn’t strictly better than the normal rotating laser scanner, because it can generate more shadows and noise than the holes it fixes, due to the angle at which the laser hits the surface.
For this reason it’s the least precise of the 3 methods but an excellent option to get a quick geometry or to fix holes.
The s_scan.py script works like this:
python s_scan.py -x<X> -x<X1> -d<degrees>
- Sweep from <X> to <X1>
- Analyze the sweep, extracting the 3D points
- Rotate the object by <degrees> degrees
- Repeat until all 360 degrees have been covered.
Of course you (or the Web UI) can set <degrees> to 0, in which case the scan is made by just sweeping the flat surface and then stopping. The result is a partial, flat scan where one side of the object is not closed and shadows persist.
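In the same spirit as the r_scan.py sketch above, the sweeping loop would look roughly like this (placeholder names and example values again, not the final s_scan.py):

x_start, x_end, degrees = 0.0, 200.0, 10.0             # example values
points = []
angle = 0.0

while True:
    profile = sweep_laser(x_start, x_end)               # placeholder: sweep from <X> to <X1>
    points += extract_points(profile, angle)             # placeholder: analyze the sweep
    if degrees == 0:                                      # flat, single-sweep scan: stop here
        break
    rotate_object(degrees)                                # placeholder: turn the 4th axis
    angle += degrees
    if angle >= 360.0:                                    # all 360 degrees covered
        break

save_asc(points, "sweep.asc")                             # placeholder: dump the point cloud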
Rock Paper Scissors // conclusions
In the end, there is no definitive method for low-cost 3D scanning, but with the FABtotum we have hopefully added some choices, making it easier for different people to find the right tool in many situations, just like we hope to do by introducing a flexible hybrid fabrication device into the 3D-printer scene.
As for the UI itself, all 3 methods are coded within the PHP program and the Python script library, and will be open source at release, so hopefully we’ll see developers having fun, customizing their experience, and possibly improving the UI with more functionality.
The plugin system should give enough freedom to implement different scanning methods for specific uses and redistribute them to other users. We know that there are people out there way smarter than us, and we are looking forward to learning and improving from them, too!
Marco,
FABteam