
point cloud to mesh grid cell averaging #15

Open
3 tasks done
hcwinsemius opened this issue Apr 25, 2024 · 8 comments

@hcwinsemius
Contributor

hcwinsemius commented Apr 25, 2024

@smathermather
Contributor

The question is how to do this in a sufficiently performant way. The first thing I'll try is PDAL's colorization filter, which lets us attribute values from a GDAL-supported raster onto any attribute in our point cloud:
https://pdal.io/en/2.7-maintenance/stages/filters.colorization.html#filters-colorization

In this way, we can write grid IDs into the point cloud as an attribute, and then use that attribution as our tie-in for extracting statistics from the point cloud per grid cell.
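
Something like the following might do it with the PDAL Python bindings. This is just an untested sketch: "grid_ids.tif" is a hypothetical single-band raster whose pixel values are the mesh cell IDs, "GridID" is the dimension we ask filters.colorization to fill from band 1, and I'm assuming the filter registers that dimension if it doesn't already exist:

import pdal

# attach the mesh cell ID stored in band 1 of a (hypothetical) grid_ids.tif
# to every point as a GridID dimension
pipeline = (
    pdal.Reader("input.laz")
    | pdal.Filter.colorization(raster="grid_ids.tif", dimensions="GridID:1:1.0")
)
pipeline.execute()
arr = pipeline.arrays[0]
print(arr["GridID"][:10])  # grid cell IDs now ride along with the points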

@smathermather
Contributor

Then we can use filters.stats to extract whichever statistic we are interested in, using a where clause to isolate the grid cell in question:
https://pdal.io/en/2.7-maintenance/stages/filters.stats.html#filters-stats
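
Continuing the hypothetical GridID dimension from the sketch above, that could look roughly like this (cell 42 is a placeholder, and the where option needs a reasonably recent PDAL):

import pdal

# attribute the points with grid cell IDs, then compute Z statistics for one cell only
pipeline = (
    pdal.Reader("input.laz")
    | pdal.Filter.colorization(raster="grid_ids.tif", dimensions="GridID:1:1.0")
    | pdal.Filter.stats(dimensions="Z", where="GridID == 42")
)
pipeline.execute()
# filters.stats puts its results (count, minimum, maximum, mean, ...) in the pipeline
# metadata: a dict in recent python-pdal releases, a JSON string in older ones
print(pipeline.metadata)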

@hcwinsemius
Contributor Author

> The question is how to do this in a sufficiently performant way. The first thing I'll try is PDAL's colorization filter, which lets us attribute values from a GDAL-supported raster onto any attribute in our point cloud:
> https://pdal.io/en/2.7-maintenance/stages/filters.colorization.html#filters-colorization
>
> In this way, we can write grid IDs into the point cloud as an attribute, and then use that attribution as our tie-in for extracting statistics from the point cloud per grid cell.

https://github.com/localdevices/ORProfile/blob/optimizer/orprofile%2Fapi%2Fdepth_sample.py uses a similar resampling approach for the sonar samples. Perhaps components can be reused?

@smathermather
Contributor

smathermather commented Jun 27, 2024

A bit of progress. There are things I might want to do differently, but I'm keeping it simple and proving the concept first.

For now, I'm just sub-sampling the dry bed point cloud, writing it to a CSV, and then feeding that in. Once everything is working, I'll likely integrate this further with PDAL's Python API and/or skip writing an intermediate file. But in the meantime, we have a JSON pipeline file to tell PDAL what to do:

[
    "input.laz",
    {
        "type":"filters.sample",
        "radius":2
    },
    {
        "type":"writers.text",
        "format":"csv",
        "order":"X,Y,Z",
        "keep_unspecified":"false",
        "filename":"filtered.csv"
    }
]

And we can invoke that like so:
pdal pipeline filter.json

Now we have a CSV that we can pull in.

@hcwinsemius -- can you show me how you generate the grid for Bamboi? I need to generate it directly (rather than load it from a geojson) to fully test, so that it has all the appropriate properties. Here's my attempt at importing:

import geopandas as gpd
import pandas as pd
import numpy as np
import orprofile.api.mesh

def regrid_dry(mesh, gdf, nodata=-9999, column="Depth"):
    """
    Regrid a set of depth samples to the provided mesh object.

    Parameters
    ----------
    mesh : orprofile.Mesh
        mesh to regrid the samples onto
    gdf : geopandas.GeoDataFrame
        point samples to average per mesh cell
    nodata : int, optional
        value marking samples that do not fall inside any mesh cell
    column : str, optional
        name of the column holding the sample values

    Returns
    -------
    ds : dataset
        mesh dataset with the per-cell averages in a "samples" variable

    """
    # get the index numbers of each mesh cell
    da_index = mesh._get_index_da()
    # get the index number of the mesh cell each sample falls in
    points = da_index.ugrid.sel_points(
        x=gdf.geometry.x,
        y=gdf.geometry.y
    )
    face_index = np.ones(len(gdf), dtype=np.int64) * nodata  # nodata marks samples outside the mesh
    face_index[points.mesh2d_index] = points.values
    gdf["face_index"] = face_index

    # average the samples per mesh face and place them onto the grid
    depth_mean = gdf[gdf["face_index"] != nodata][[column, "face_index"]].groupby("face_index").mean()
    depth_grid = np.zeros(da_index.shape) * np.nan
    depth_grid[depth_mean.index] = depth_mean[column]
    ds = mesh._get_empty_ds()
    ds["samples"] = ("mesh2d_nFaces", depth_grid)
    return ds

df = pd.read_csv('filtered.csv')
gdf = gpd.GeoDataFrame(
    df, geometry=gpd.points_from_xy(df.X, df.Y), crs="EPSG:32630"
)
print(gdf.head())

mesh = gpd.read_file("mesh_example_v2.geojson")
print(mesh.head())

regrid_dry(mesh, gdf)

Which errors out in __getattr__:

python dry_sample.py 
           X          Y       Z                     geometry
0  606773.98  901452.56  120.49  POINT (606773.98 901452.56)
1  606775.50  901451.27  120.07   POINT (606775.5 901451.27)
2  606776.91  901449.80  119.82   POINT (606776.91 901449.8)
3  606785.67  901435.10  120.65   POINT (606785.67 901435.1)
4  606785.28  901433.41  121.65  POINT (606785.28 901433.41)
   mesh2d_nFaces        yi  ...  cols                                           geometry
0              0  0.000000  ...     0  POLYGON ((606783.628 901576.298, 606776.657 90...
1              1  2.050305  ...     0  POLYGON ((606784.203 901574.373, 606777.264 90...
2              2  4.097750  ...     0  POLYGON ((606784.778 901572.449, 606777.872 90...
3              3  6.141944  ...     0  POLYGON ((606785.352 901570.524, 606778.48 901...
4              4  8.182641  ...     0  POLYGON ((606785.927 901568.599, 606779.088 90...

[5 rows x 9 columns]
Traceback (most recent call last):
  File "/home/smathermather/Documents/OpenDroneMap/TAHMO/Bamboi/average/dry_sample.py", line 46, in <module>
    regrid_samples(mesh, gdf)
  File "/home/smathermather/Documents/OpenDroneMap/TAHMO/Bamboi/average/dry_sample.py", line 20, in regrid_samples
    da_index = mesh._get_index_da()
  File "/home/smathermather/Documents/OpenDroneMap/TAHMO/Bamboi/average/.venv/lib/python3.10/site-packages/pandas/core/generic.py", line 6299, in __getattr__
    return object.__getattribute__(self, name)
AttributeError: 'GeoDataFrame' object has no attribute '_get_index_da'

@smathermather
Contributor

smathermather commented Sep 16, 2024

Linking to the original Python issue documenting point cloud reading, though the below might be a bit more Pythonic:
#10

Existing averaging should work fine, so I checked off the above. Feel free to uncheck if my assumptions are wrong. Probably the simplest way to get a point cloud into a numpy array is as follows:

import pdal
# thin the point cloud to a 2-unit spacing and stream it in chunks of 500 points
pipeline = (
    pdal.Reader("https://crankyserver.com/api/projects/243/tasks/d6bb6fd8-902d-41fc-b969-87b6732a30ef/download/georeferenced_model.laz")
    | pdal.Filter.sample(radius=2)
)
for array in pipeline.iterator(chunk_size=500):
    print(len(array))

This has the added bonus of operating on controllable, windowed subsets of the data. Access to the full dataset is also possible (with numpy imported as np) with something like the following:
full_array = np.concatenate(list(pipeline))

These are pretty minor changes to the streamable-pipeline examples documented here:
https://pypi.org/project/pdal/

As before, the file can be local or remote. I have it pointed to crankyserver for this example, but a local file usually makes more sense.
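
If we do skip the intermediate CSV, the sampled array can feed straight into a GeoDataFrame. A rough sketch, assuming a local copy of the point cloud and the EPSG:32630 CRS used for Bamboi earlier in this thread:

import numpy as np
import pandas as pd
import geopandas as gpd
import pdal

# thin the point cloud and gather all chunks into one structured array
pipeline = pdal.Reader("georeferenced_model.laz") | pdal.Filter.sample(radius=2)
pipeline.execute()
full_array = np.concatenate(pipeline.arrays)

# keep only the coordinates and build the GeoDataFrame directly, no filtered.csv
df = pd.DataFrame(full_array)[["X", "Y", "Z"]]
gdf = gpd.GeoDataFrame(
    df, geometry=gpd.points_from_xy(df.X, df.Y), crs="EPSG:32630"
)
print(gdf.head())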

@smathermather
Contributor

smathermather commented Sep 17, 2024

The following writes a GeoJSON file from a sampled point cloud so that it can later be parsed into the appropriate cells:

import pdal

# sample the point cloud to a 4-unit spacing
pipeline = (
    pdal.Reader("https://crankyserver.com/api/projects/243/tasks/d6bb6fd8-902d-41fc-b969-87b6732a30ef/download/georeferenced_model.laz")
    | pdal.Filter.sample(radius=4)
)
print(pipeline.execute())

full_array = pipeline.arrays[0]

# write the sampled points back out as GeoJSON via writers.text
pipeline = pdal.Writer.text(
    filename="filtered.geojson",
    format="geojson",
    order="X,Y,Z",
).pipeline(full_array)

print(pipeline.execute())

Although there is a bug in PDAL: it isn't writing valid JSON, as the comma delimiters between objects are missing.
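
Until that's fixed upstream, one possible workaround (a sketch, not what I'm doing above) is to let geopandas write the GeoJSON instead of writers.text:

import pandas as pd
import geopandas as gpd

# full_array is the sampled array from the pipeline above; EPSG:32630 is the
# CRS used for Bamboi earlier in this thread and is an assumption here
df = pd.DataFrame(full_array)[["X", "Y", "Z"]]
gdf = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df.X, df.Y), crs="EPSG:32630")
gdf.to_file("filtered.geojson", driver="GeoJSON")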

@smathermather
Contributor

[image attachment]

@smathermather
Contributor

smathermather commented Sep 17, 2024

Submitted the bug report upstream in PDAL:
PDAL/PDAL#4498

And a pull request now adds the dry bed data to tests:
#22
