
Discriminating Vegetation from Buildings


I came across an interesting blog article by Jarlath O’Neil-Dunne from the University of Vermont on how LiDAR return information can be used as a simple way to discriminate vegetated areas from buildings. He first computes a normalized first-return DSM and a normalized last-return DSM and then subtracts one from the other to highlight the vegetation. He writes “This is because the height difference of the first and last returns for buildings is often identical, whereas for trees it is typically much greater.”

Side note: I am not entirely happy with the terminology of a “Normalized Digital Terrain Model (nDTM)”. Jarlath writes: “A similar approach is used to create a Normalized Digital Terrain Model (nDTM).  A DTM is generated from the last returns. The DEM is then subtracted from the DTM to create the nDTM.” I like to reserve the term “Digital Terrain Model (DTM)” for bare-earth terrain computed from returns classified as ground.

Below I radically simplify Jarlath’s workflow by eliminating the two normalization steps. This not only saves the creation of three temporary rasters but also removes the requirement to have ground-classified LiDAR:

  1. Create a first-return frDSM
  2. Create a last-return lrDSM
  3. Subtract the lrDSM from the frDSM to get a return-difference rdDEM

This rdDEM has non-zero heights in all areas where the LiDAR produced more than one return. This happens most often, and is most pronounced, in vegetated areas. Here is how to implement this with las2dem of LAStools:

las2dem -i ..\data\fusa.laz -first_only -o frDSM.bil
las2dem -i ..\data\fusa.laz -last_only -o lrDSM.bil
lasdiff -i frDSM.bil -i lrDSM.bil -o rdDEM.laz
lasview -i rdDEM.laz

The return-difference rdDEM shows the height difference between first and last returns.

Does this work well for you? The results on the “fusa.laz” data set are not entirely convincing … maybe because the vegetation was too dense (leaf-on?) so that the LiDAR penetration is not very pronounced. You can switch back and forth between the first-return and the last-return DSM by loading both *.bil files into lasview with the ‘-files_are_flightlines’ option and then pressing hotkeys ‘0’ and ‘1’ to toggle between the point sets and ‘t’ to triangulate the selected DSM.

lasview -i frDSM.bil lrDSM.bil -files_are_flightlines
first-return DSM

last-return DSM

We should point out that for Jarlath the return-difference raster rdDEM is just one part of a larger pipeline: it is followed by an object-based approach that integrates the spectral information from aerial imagery and then uses iterative expert systems to further improve the tree canopy classification.

Nevertheless, we believe that our way of classifying vegetation and buildings via a pipeline of lasground, lasheight, and lasclassify gives a better and more robust initial guess of what is vegetation and what are buildings than multi-return height differences. Below you see this implemented using the new LASlayers concept:

lasground -i ..\data\fusa.laz -city -extra_fine -olay
lasheight  -i ..\data\fusa.laz -ilay -olay
lasclassify -i ..\data\fusa.laz -ilay -olay 
lasview -i ..\data\fusa.laz -ilay
Automated building and vegetation classification with lasclassify.

Using lasgrid there are many ways to easily turn the classified point cloud into a raster for subsequent exploitation together with other image data in raster processing software. An example is shown below.

lasgrid -i ..\data\fusa.laz -ilay -keep_class 5 ^
        -step 0.5 -subcircle 0.1 -occupancy -fill 1 -false ^
        -use_bb -o vegetation.tif
lasgrid -i ..\data\fusa.laz -ilay -keep_class 6 ^
        -step 0.5 -subcircle 0.1 -occupancy -fill 1 -gray ^
        -use_bb -o buildings.tif
gdalwarp vegetation.tif buildings.tif classified.tif

classified

Alternatively we can use lasboundary to create a shapefile describing either the vegetation or the buildings.

lasboundary -i ..\data\fusa.laz -ilay -keep_class 5 ^
            -disjoint -concavity 1.5 -o vegetation.shp
lasboundary -i ..\data\fusa.laz -ilay -keep_class 6 ^
            -disjoint -concavity 1.5 -o buildings.shp
SHP file generated with lasboundary with polygons describing the vegetation.

SHP file generated with lasboundary with polygons describing the buildings.



Rasterizing Perfect Canopy Height Models from LiDAR


In the literature you sometimes read “we generated a Canopy Height Model (CHM) and then did this and that” without the process used to create the CHM being described in detail. One approach computes the CHM as the difference between a DSM and a DTM: create a DTM from the ground returns and a DSM from the first returns and subtract the two rasters. Even then a question is left unanswered: how exactly are the DTM and the DSM generated? A different approach computes the CHM directly from height-normalized LiDAR points. Again there are many ways of doing so and we want to look at the possibilities in more detail.
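For reference, a minimal sketch of that first DSM-minus-DTM route with LAStools could look as follows, assuming a ground-classified input file; the name ‘classified.laz’ and the 1 meter step size are placeholders for illustration only:

:: DTM from the ground-classified returns (class 2)
las2dem -i classified.laz ^
        -keep_class 2 ^
        -step 1.0 ^
        -o dtm.bil
:: DSM from the first returns
las2dem -i classified.laz ^
        -first_only ^
        -step 1.0 ^
        -o dsm.bil
:: subtract the two rasters to obtain the CHM
lasdiff -i dsm.bil -i dtm.bil -o chm.laz

The remainder of this article takes the second route and computes the CHM directly from height-normalized points.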

the 100 by 100 meter sample plot ‘drawno.laz’

In the following we demonstrate different alternatives for CHM generation on a 100 by 100 meter sample LiDAR tile ‘drawno.laz‘ from a forest near Drawno in Poland (that you can download here), slowly converging towards the CHM generation method that we recommend using. We start with ground classifying the LiDAR using lasground:

lasground -i drawno.laz ^
          -wilderness ^
          -o ground.laz

Then we height-normalize the LiDAR using lasheight. As we know that there are no trees higher than 28 meters in this plot, we drop all LiDAR points higher than 30 meters as they are likely bird hits or other noise.

lasheight -i ground.laz ^
          -drop_above 30 ^
          -replace_z ^
          -o normalized.laz

As the sample plot has an average pulse spacing of around 0.3 meters we decide to use a step size of 0.33333 meters to create a 300 by 300 pixel raster. We produce a false color visualization instead of a height raster for our CHMs so we can include the results here. The simplest method uses lasgrid with the option ‘-highest’, which uses for each pixel the highest z coordinate among all LiDAR returns falling into the corresponding 0.33333 by 0.33333 meter area.

lasgrid -i normalized.laz ^
        -step 0.33333 ^
        -highest ^
        -false -set_min_max 0 25 ^
        -ll 278200 602200 -ncols 300 -nrows 300 ^
        -o chm_grd.png
Gridding the highest point that falls into each 0.33333 meter by 0.33333 meter cell.

The resulting CHM (shown at a 200 % zoom) is full of empty pixels and so-called “pits” that will hamper subsequent analysis for single tree detection, height and crown diameter computation, and the like. A simple improvement can be obtained by replacing each LiDAR return with a small disk. After all, the laser beam has – depending on the flying height – a diameter of 10 to 50 centimeters and approximating it with a single point of zero area seems overly conservative. In lasgrid there is the option ‘-subcircle 0.1’, which replaces each return by a circle with a radius of 10 centimeters or a diameter of 20 centimeters.

lasgrid -i normalized.laz ^
        -subcircle 0.1 ^
        -step 0.33333 ^
        -highest ^
        -false -set_min_max 0 25 ^
        -ll 278200 602200 -ncols 300 -nrows 300 ^
        -o chm_grd_d20.png
Gridding the highest z value after turning each point into a circle with 20 cm diameter.

The resulting CHM (shown again at a 200 % zoom) is much improved but there are still empty pixels and “pits”. We could simply widen the circles further with ‘-subcircle 0.15’, ‘-subcircle 0.2’, or ‘-subcircle 0.25’. As you can see below this produces increasingly smooth CHMs with widening tree crowns. But this “splats” the LiDAR returns into circles that grow larger and larger than the laser beam diameter and thus have less and less in common with reality.

Gridding after turning each point into a circle with 30 cm diameter. Gridding after turning each point into a circle with 40 cm diameter. Gridding after turning each point into a circle with 50 cm diameter.

Gridding the highest returns will often leave empty pixels in the data even when “splatting” the points. Another popular approach avoids this by interpolating all first returns with a triangulated irregular network (TIN) and then rasterizing it onto a grid to create the CHM. This can be implemented with las2dem as shown below:

las2dem -i normalized.laz ^
        -first_only ^
        -step 0.33333 ^
        -false -set_min_max 0 25 ^
        -ll 278200 602200 -ncols 300 -nrows 300 ^
        -o chm_tin.png
Rasterizing the TIN that interpolates all first returns onto a 0.33333 meter grid.

The result has no more empty pixels but is full of pits because many laser pulses manage to penetrate deep into the canopy before producing the first return. When combining multiple flight lines some laser pulses may even have an unobstructed view of the ground under the canopy without hitting any branches. These “pits” and how to avoid them are discussed at great length in a paper by Khosravipour et al. in the September 2014 edition of the ASPRS PE&RS journal. We build upon these ideas in the following. But first we combine the ‘-highest’ gridding with TIN interpolation in a two-step approach: (1) keep only the highest return per grid cell with lasthin; (2) interpolate these highest returns with las2dem.

lasthin -i normalized.laz ^
        -step 0.33333 ^
        -highest ^
        -o temp.laz
las2dem -i temp.laz ^
        -step 0.33333 ^
        -false -set_min_max 0 25 ^
        -ll 278200 602200 -ncols 300 -nrows 300 ^
        -o chm_tin_his.png
Rasterizing the TIN that interpolates only the highest points falling into each 0.33333 meter by 0.33333 meter grid cell.

Next we integrate the idea of “splatting” the points into circles with a diameter of 20 centimeters to account for the laser beam diameter by adding the option ‘-subcircle 0.1’ to lasthin.

lasthin -i normalized.laz ^
        -subcircle 0.1 ^
        -step 0.33333 ^
        -highest ^
        -o temp.laz
las2dem -i temp.laz ^
        -step 0.33333 ^
        -false -set_min_max 0 25 ^
        -ll 278200 602200 -ncols 300 -nrows 300 ^
        -o chm_tin_his_d20.png
Rasterizing the TIN that interpolates only the highest points of a 0.33333 meter grid after first splatting points into circles with 20 cm in diameter.

The results are much nicer but there are still pits. However, one may argue at this point that thinning with a step size of 0.33333 is too aggressive and removes too many points for the subsequent interpolation and rasterization at the exact same step size. So we double the resolution of the temporary point cloud by thinning with half the step size, namely 0.16667.

lasthin -i normalized.laz ^
        -subcircle 0.1 ^
        -step 0.16667 ^
        -highest ^
        -o temp.laz
las2dem -i temp.laz ^
        -step 0.33333 ^
        -false -set_min_max 0 25 ^
        -ll 278200 602200 -ncols 300 -nrows 300 ^
        -o chm_tin_hhs_d20.png
Rasterizing the TIN that interpolates only the highest points of a 0.16667 meter grid after first splatting points into circles with 20 cm in diameter onto a raster with step size 0.33333.

We now have more detail but also many more pits. Furthermore, the interpolation of highest returns makes a big error across areas where we do not have any LiDAR returns but that are flanked by canopy returns on both sides. This happens in the circled area where wrong canopy is created because the TIN interpolates canopy returns across a small wet area without any returns. We now show how the pit-free method of Khosravipour et al. can be used to generate the perfect CHM:

rmdir tmp_chm /s /q
mkdir tmp_chm
las2dem -i normalized.laz ^
        -drop_z_above 0.1 ^
        -step 0.33333 ^
        -ll 278200 602200 -ncols 300 -nrows 300 ^
        -o tmp_chm/chm_ground.bil
lasthin -i normalized.laz ^
        -subcircle 0.1 ^
        -step 0.16667 ^
        -highest ^
        -o temp.laz
las2dem -i temp.laz ^
        -step 0.33333 -kill 1.0 ^
        -ll 278200 602200 -ncols 300 -nrows 300 ^
        -o tmp_chm/chm_00.bil
las2dem -i temp.laz ^
        -drop_z_below 2 ^
        -step 0.33333 -kill 1.0 ^
        -ll 278200 602200 -ncols 300 -nrows 300 ^
        -o tmp_chm/chm_02.bil
las2dem -i temp.laz ^
        -drop_z_below 5 ^
        -step 0.33333 -kill 1.0 ^
        -ll 278200 602200 -ncols 300 -nrows 300 ^
        -o tmp_chm/chm_05.bil
las2dem -i temp.laz ^
        -drop_z_below 10 ^
        -step 0.33333 -kill 1.0 ^
        -ll 278200 602200 -ncols 300 -nrows 300 ^
        -o tmp_chm/chm_10.bil
las2dem -i temp.laz ^
        -drop_z_below 15 ^
        -step 0.33333 -kill 1.0 ^
        -ll 278200 602200 -ncols 300 -nrows 300 ^
        -o tmp_chm/chm_15.bil
las2dem -i temp.laz ^
        -drop_z_below 20 ^
        -step 0.33333 -kill 1.0 ^
        -ll 278200 602200 -ncols 300 -nrows 300 ^
        -o tmp_chm/chm_20.bil
las2dem -i temp.laz ^
        -drop_z_below 25 ^
        -step 0.33333 -kill 1.0 ^
        -ll 278200 602200 -ncols 300 -nrows 300 ^
        -o tmp_chm/chm_25.bil
lasgrid -i tmp_chm/chm*.bil -merged ^
        -step 0.33333 ^
        -highest ^
        -false -set_min_max 0 25 ^
        -ll 278200 602200 -ncols 300 -nrows 300 ^
        -o chm_pit_free_d20.png
rmdir tmp_chm /s /q
Running the pit-free algorithm on the highest LiDAR returns in a 0.16667 meter grid (after splatting them to circles 20 cm in diameter) and producing a 0.33333 meter raster CHM.

With such a perfectly pit-free output we can now be even more conservative and lower the diameter of the circles that replace each LiDAR return from 20 cm to 10 cm by replacing ‘-subcircle 0.1’ with ‘-subcircle 0.05’ in the above script.
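Concretely, only the lasthin call in the script above changes; all las2dem and lasgrid calls stay the same:

lasthin -i normalized.laz ^
        -subcircle 0.05 ^
        -step 0.16667 ^
        -highest ^
        -o temp.laz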

Running the pit-free algorithm on the highest LiDAR returns in a 0.16667 meter grid (after splatting them to circles only 10 cm in diameter) and producing a 0.33333 meter raster CHM.

Compared to the original pit-free algorithm published by Khosravipour et al. there are two minor differences: (1) instead of using all first returns as input we use only the highest return per grid cell, after splatting all returns as small circles (instead of area-less points) onto a grid with twice the output resolution; (2) we apply the ‘-kill 1.0’ threshold also when generating the partial CHM from all points for ‘chm_00.bil’ and add a new ‘chm_ground.bil’ to fill the potential holes. This prevents higher-up canopy returns from being wrongly connected across water bodies where there are no LiDAR returns at all.

Reference:
Khosravipour, A., Skidmore, A.K., Isenburg, M., Wang, T.J., Hussin, Y.A., 2014. Generating pit-free Canopy Height Models from Airborne LiDAR. PE&RS = Photogrammetric Engineering and Remote Sensing 80, 863-872.


Two ASPRS awards for “pit-free” CHM algorithm

PRESS RELEASE (for immediate release)
July 29, 2015
rapidlasso GmbH, Gilching, Germany

The paper “Generating Pit-free Canopy Height Models from Airborne LiDAR” co-authored by rapidlasso GmbH and published in the September 2014 issue of PE&RS (the journal of the ASPRS) was awarded twice at the IGTF 2015 – ASPRS Annual Conference in Tampa, Florida last May. The paper took home the John I. Davidson President’s Award for Practical Papers (2nd Place) as well as the Talbert Abrams Award (2nd Honorable Mention).

The “pit-free” CHM paper wins the John I. Davidson President’s Award for Practical Papers (2nd Place) and the Talbert Abrams Award (Second Honorable Mention).

The “pit-free” CHM paper is joint work with Anahita Khosravipour, Andrew K. Skidmore, Tiejun Wang, and Yousif A. Hussin of ITC and University of Twente. It describes a technique that can create raster Canopy Height Models (CHMs) without the so-called “pits” that tend to hamper the subsequent extraction of individual tree attributes such as number, location, height, and crown diameter. The paper uses data measured in the field by ITC researchers to show that “pit-free” CHMs significantly lower the commission and omission errors in single tree detection.

Visual side-by-side comparison of a “standard” versus a “pit-free” CHM.

The “pit-free” CHM algorithm can easily be implemented with LAStools either by modifying an available batch script or by executing the LAStools Pipelines distributed with the toolboxes for ArcGIS and QGIS. A detailed blog article that compares various different methods for creating CHMs is available via the Web pages of rapidlasso GmbH.

We at rapidlasso GmbH would especially like to congratulate the main author, Ms. Anahita Khosravipour, who managed to win two awards with her very first academic publication. Those who like our “pit-free” CHM algorithm will probably also love the new technique that our team will introduce later this year at SilviLaser 2015 in France.

About rapidlasso GmbH:
Technology powerhouse rapidlasso GmbH specializes in efficient LiDAR processing tools that are widely known for their high productivity. They combine robust algorithms with efficient I/O and clever memory management to achieve high throughput for data sets containing billions of points. The company’s flagship product – the LAStools software suite – has deep market penetration and is heavily used in industry, government agencies, research labs, and educational institutions. Visit http://rapidlasso.com for more information.


Rapidlasso receives “Green Asia Award” at ACRS 2015

PRESS RELEASE (for immediate release)
November 16, 2015
rapidlasso GmbH, Gilching, Germany

At the Asian Conference on Remote Sensing 2015 (ACRS 2015) held in Manila, rapidlasso GmbH was honored with the “Green Asia Award” by the Chinese Society of Photogrammetry and Remote Sensing (CSPRS). This award is given to a paper that directs Asia towards a greener future using remote sensing technology. This year’s award commends rapidlasso GmbH for advancing the area of LiDAR processing through their PulseWaves effort. PulseWaves is a vendor-neutral full waveform LiDAR data exchange format and API that simplifies access to full waveform data and allows researchers to focus on algorithms and share results. In the future this technology may prove valuable for improving biomass estimates for carbon credit programs such as the TREEMAPS project of WWF.

Prof. Kohei Cho and Prof. Peter T. Y. Shih present the Green Asia Award

The society communicated to Dr. Martin Isenburg, CEO of rapidlasso GmbH, that this award was also meant to honor his many years of teaching and capacity building across the Asian region. Since the beginning of 2013 rapidlasso GmbH has conducted well over 50 seminars, training events, and hands-on workshops at universities, research institutes, and government agencies in Thailand, Malaysia, Myanmar, Vietnam, Indonesia, Singapore, Taiwan, Japan, and the Philippines. The on-going LiDAR teaching efforts of rapidlasso GmbH in Asia and elsewhere can be followed via their event page.

Green Asia Award given to the CEO of rapidlasso GmbH

The award certificate that was presented to Dr. Martin Isenburg by Prof Kohei Cho and Prof Peter Shih during the closing ceremony of ACRS 2015 came with a cash reward of USD 300. The award money was donated to the ISPRS summer school that followed the ACRS conference to top off the pre-existing “green sponsorship” by rapidlasso GmbH that was already supporting a “green catering” of summer school lunches and dinners to avoid single-use cups, plastic cutlery and styrofoam containers. The additional award money was used for hosting the main summer school dinner at a sustainable family-run restaurant serving “happy chickens” and “happy pigs” raised organically on a local farm.

Award Ceremony held during Closing of ACRS 2015

About rapidlasso GmbH:
Technology powerhouse rapidlasso GmbH specializes in efficient LiDAR processing tools that are widely known for their high productivity. They combine robust algorithms with efficient I/O and clever memory management to achieve high throughput for data sets containing billions of points. The company’s flagship product – the LAStools software suite – has deep market penetration and is heavily used in industry, government agencies, research labs, and educational institutions. Visit http://rapidlasso.com for more information.


LASmoons: Andreas Konring and Susanne Bjerg Petersen


Andreas Konring and Susanne Bjerg Petersen (recipients of three LASmoons)
Department of Environmental Engineering
Technical University of Denmark, Lyngby, DENMARK

Background:
Copenhagen has in recent years experienced severe flooding due to cloudbursts, which has increased the focus on climate adaptation and the implementation of green infrastructure. The use of sustainable urban drainage system (SUDS) solutions to divert stormwater from the existing drainage system will be a central measure to increase climate resilience while greening the city, and the Copenhagen municipality is investing 700 million euros in SUDS projects alone. Additionally, the city has decided to plant 100,000 new trees in the next 10 years as another measure to enhance natural amenities but also because of air cleansing and cooling effects. However, it has not been investigated what effect the current canopy cover has on rainwater retention through increased evaporation and soil infiltration, and whether planting more trees could help mitigate the pluvial flooding issues.

Example of a pit-free CHM in an urban environment.

Goal:
This study aims to estimate the current number of trees and to extract tree metrics such as volume, canopy cover and densities using the national LiDAR dataset and NIR orthophotos from summer and spring. These canopy metrics will be used to inform a simple tree model that will be implemented in a 2-D overland flow model to assess the effect of trees on flood mitigation. The created CHM could also be used in further analysis of the urban heat island effect.

Data:
+ 100 square kilometers of the Danish national LiDAR dataset collected in November 2014 covering the municipality of Copenhagen
+ density of 4 – 5 last returns per square meter
+ classified into surface (1), ground (2), vegetation (3,4,5), buildings (6), noise (7) and water (9)

LAStools processing (a command-line sketch for the first steps follows the list):
1) create square tiles with buffer to avoid edge artifacts [lastile]
2) generate DTMs and DSMs with only buildings and terrain [las2dem]
3) normalize heights, remove outliers and keep classes 2, 5 and 6 [lasheight]
4) create rasters with forest metrics [lascanopy]
5) calculate the pit-free Canopy Height Model (CHM) proposed by Khosravipour et al. (2014) [lasthin, las2dem, lasgrid]
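Below is a minimal command-line sketch of what steps 1 to 3 could look like; the directory names, tile size, buffer, raster step, the 40 meter outlier cutoff, and the number of cores are illustrative assumptions and not part of the proposal:

:: step 1: tile the data with a buffer against edge artifacts
lastile -i copenhagen\*.laz ^
        -tile_size 1000 -buffer 30 -flag_as_withheld ^
        -odir tiles -o cph.laz
:: step 2: DSM rasters from terrain (2) and building (6) returns only
las2dem -i tiles\*.laz ^
        -keep_class 2 6 ^
        -step 1.0 -use_orig_bb ^
        -odir dsm -obil -cores 4
:: step 3: normalize heights, drop outliers, keep classes 2, 5 and 6
lasheight -i tiles\*.laz ^
          -keep_class 2 5 6 ^
          -drop_above 40 ^
          -replace_z ^
          -odir normalized -olaz -cores 4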

Reference:
Copenhagen Municipality, 2011. Copenhagen Climate Adaption Plan.
Geodatastyrelsen, 2014. Danmarks højdemodel, DHM/Punktsky – Dataversion 2.0 januar 2015. Product specification.
Khosravipour, A., Skidmore, A.K., Isenburg, M., Wang, T.J., Hussin, Y.A., 2014. Generating pit-free Canopy Height Models from Airborne LiDAR. PE&RS = Photogrammetric Engineering and Remote Sensing 80, 863-872.


Generating Spike-Free Digital Surface Models from LiDAR


A Digital Surface Model (DSM) represents the elevation of the landscape including all vegetation and man-made objects. An easy way to generate a DSM raster from LiDAR is to use the highest elevation value from all points falling into each grid cell. However, this “binning” approach only works when the resolution of the LiDAR is higher than the resolution of the raster. Only then do sufficiently many LiDAR points fall into each raster cell to prevent “empty pixels” and “data pits” from forming. For example, given LiDAR with an average pulse spacing of 0.5 meters one can easily generate a 2.5 meter DSM raster with simple “binning”. But to generate a 0.5 meter DSM raster we need to use an “interpolation” method.

Laser pulses and discrete returns of four flightlines.

For the past twenty or so years, GIS textbooks and LiDAR tutorials have recommended using only the first returns to construct the interpolating surface for DSM generation. The intuition is that the first return is the highest return for an airborne survey where the laser beams come (more or less) from above. Hence, an interpolating surface of all first returns is constructed – usually based on a 2D Delaunay triangulation – and the resulting Triangular Irregular Network (TIN) is rasterized onto a grid at a user-specified resolution to create the DSM raster. A Canopy Height Model (CHM) is generated the same way, except that elevations are height-normalized either before or after the rasterization step. However, using a first-return interpolation for DSM/CHM generation has two critical drawbacks:

(1) Using only first returns means not all LiDAR information is used and some detail is missing. This is particularly the case for off-nadir scan angles in traditional airborne surveys. It becomes more pronounced with new scanning systems such as UAV or hand-held LiDAR where laser beams no longer come “from above”. Furthermore, in the event of clouds or high noise the first returns are often removed and the remaining returns are not renumbered. Hence, any laser shot whose first return reflects from a cloud or a bird does not contribute its highest landscape hit to the DSM or CHM.

(2) Using all first returns practically guarantees the formation of needle-shaped triangles in vegetated areas and along building roofs that appear as spikes in the TIN. This is because at off-nadir scan angles first returns are often generated far below other first returns as shown in the illustration above. The resulting spikes turn into “data pits” in the corresponding raster that not only look ugly but impact the utility of the DSM or CHM in subsequent analysis, for example, in forestry applications when attempting to extract individual trees.

Interpolating all first returns vs. interpolating all relevant returns.

In the following we present results and command-line examples for the new “spike-free” algorithm by Khosravipour et al. (2015, 2016) that is implemented (as a slow prototype) in the current LAStools release. This completely novel method for DSM generation triangulates all relevant LiDAR returns using a Constrained Delaunay algorithm. This constructs a “spike-free” TIN that is in turn rasterized into a “pit-free” DSM or CHM. This work is both a generalization and an improvement of our previous result on pit-free CHM generation.

We now compare our “spike-free” DSM to a “first-return” DSM on the two small urban data sets “france.laz” and “zurich.laz” distributed with LAStools. Using lasinfo with options ‘-last_only’ and ‘-cd’ we determine that the average pulse spacing is around 0.33 meter for “france.laz” and 0.15 meter for “zurich.laz”. We decide to create a hillshaded 0.25 meter DSM for “france.laz” and a 0.15 meter DSM for “zurich.laz” with the command-lines shown below.

las2dem -i ..\data\france.laz ^
        -keep_first ^
        -step 0.25 ^
        -hillshade ^
        -o france_fr.png
las2dem -i ..\data\france.laz ^
        -spike_free 0.9 ^
        -step 0.25 ^
        -hillshade ^
        -o france_sf.png
Hillshaded first-return DSM "france_fr.png" vs. hillshaded spike-free DSM "france_sf.png".
las2dem -i ..\data\zurich.laz ^
        -keep_first ^
        -step 0.15 ^
        -hillshade ^
        -o zurich_fr.png
las2dem -i ..\data\zurich.laz ^
        -spike_free 0.5 ^
        -step 0.15 ^
        -hillshade ^
        -o zurich_sf.png
Hillshaded first-return DSM "zurich_fr.png" vs. hillshaded spike-free DSM "zurich_sf.png".

The differences between a first-return DSM and a spike-free DSM are most drastic along building roofs and in vegetated areas. To inspect the differences between a first-return and our spike-free TIN in more detail we use lasview, which allows us to iteratively visualize the construction process of a spike-free TIN.

lasview -i ..\data\france.laz -spike_free 0.9

Pressing <f> and <t> constructs the first-return TIN. Pressing <SHIFT> + <t> destroys the first-return TIN. Pressing <SHIFT> + <y> constructs the spike-free TIN. Pressing <y> once destroys the spike-free TIN. Pressing <y> many times iteratively constructs the spike-free TIN.

First-return TIN of france.laz vs. spike-free TIN of france.laz.

One crucial piece of information is still missing: what value should you use as the freeze constraint of the spike-free algorithm, which we set to 0.9 for “france.laz” and to 0.5 for “zurich.laz” via the command-line option ‘-spike_free’? The optimal value is related to the expected edge length and we found the 99th percentile of a histogram of edge lengths of the last-return TIN to be useful. Or simpler … try a value that is about three times the average pulse spacing.
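For example, you can let lasinfo estimate the average spacing of the last returns and then multiply by roughly three (the file name is the same example data used above):

:: '-cd' reports the average point density and spacing of the last returns
lasinfo -i ..\data\france.laz ^
        -last_only ^
        -cd
:: the reported spacing is around 0.33 m, so '-spike_free 0.9' is a reasonable choice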

First-return TIN of zurich.laz vs. spike-free TIN of zurich.laz.

References:
Khosravipour, A., Skidmore, A.K., Isenburg, M. and Wang, T.J. (2015) Development of an algorithm to generate pit-free Digital Surface Models from LiDAR, Proceedings of SilviLaser 2015, pp. 247-249, September 2015.
Khosravipour, A., Skidmore, A.K., Isenburg, M. (2016) Generating spike-free Digital Surface Models using raw LiDAR point clouds: a new approach for forestry applications, (journal manuscript under review).


LASmoons: Alen Berta


Alen Berta (recipient of three LASmoons)
Department of Terrestrial Ecosystems and Landscape, Faculty of Forestry
University of Zagreb and Oikon Ltd Institute for Applied Ecology, CROATIA

Background:
After becoming an EU member state, Croatia is obliged to fulfill the obligations arising from the Kyoto protocol: a National Inventory Report (NIR) of greenhouse gases according to the UNFCCC. One of the most important inputs for the NIR is knowing how much forested area there is and what its wood stock and increment are. This is needed to calculate the size of the existing carbon pool and its potential for sequestration. Since in Croatia, according to legislation, it is not mandatory to calculate the wood stock and yield of degraded forest areas (shrubbery and thickets) during the creation of the usual forest management plans, this data is missing. So far, only a rough approximation of the wood stock and increment is used during the creation of the NIR. However, these areas are expanding every year due to the depopulation of rural areas and the cessation of traditional farming.

very diverse stand structure of degraded forest areas (shrubbery and thickets)

Goal:
This study will focus on two things: (1) developing regression models for biomass volume estimation in continental shrubberies and thickets based on airborne LiDAR data. To correlate LiDAR data with biomass volume, over 70 field plots with a radius of 12 meters have been established in more than 550 ha of hilly and lowland shrubberies in Central Croatia and all trees and shrubs above 1 cm Diameter at Breast Height (DBH) were recorded with information about tree species, DBH and height. Precise locations of the field plots were measured with survey-grade GNSS and biomass was calculated with parameters from the literature. For regression modeling, various statistics from the point clouds matching the field plots will be used (e.g. height percentiles, standard deviation, skewness, kurtosis, …). (2) Testing the developed models at different laser pulse densities to find out whether there is a significant deviation in the results if the LiDAR point cloud is thinner. This will be helpful for planning later scans for change detection (increment or degradation).

Data:
+ 641 square km of discrete-return LiDAR data around the City of Zagreb, the capital of Croatia (but since it is a highly populated area, only the outskirts will be used)
+ raw geo-referenced LAS files with up to 3 returns and an average last-return point density of 1 pts/m².

LAStools processing (a command-line sketch for some of these steps follows the list):
1) extract area of interest [lasclip or las2las]
2) create differently dense versions (for goal no. 2) [lasthin]
3) remove isolated noise points [lasnoise]
4) classify point clouds into ground and non-ground [lasground]
5) create a Digital Terrain Model (DTM) [las2dem]
6) compute height of points above the ground [lasheight]
7) classify point clouds into vegetation and other [lasclassify]
8) normalize height of the vegetation points [lasheight]
9) extract the areas of the field plots [lasclip]
10) compute various metrics for each plot [lascanopy]
11) convert LAZ to TXT for regression modeling in R [las2txt]
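A hedged sketch of steps 3, 4, 9 and 11 is shown below. The step sizes, file and directory names, and the polygon file ‘plots.shp’ holding the plot boundaries are illustrative assumptions; the ‘-poly’ option of lasclip is used under the assumption that the plots are available as polygons:

:: step 3: flag isolated points as noise (class 7); step sizes are assumptions
lasnoise -i aoi\*.laz ^
         -step_xy 4 -step_z 2 ^
         -odir denoised -olaz
:: step 4: classify into ground (2) and non-ground (1), ignoring the noise
lasground -i denoised\*.laz ^
          -ignore_class 7 ^
          -nature ^
          -odir ground -olaz
:: step 9: clip out the 12 m radius field plots (assuming a polygon file 'plots.shp')
lasclip -i normalized.laz ^
        -poly plots.shp ^
        -o plots.laz
:: step 11: export to ASCII for regression modeling in R
las2txt -i plots.laz ^
        -parse xyz ^
        -o plots.txt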


LASmoons: Elia Palop-Navarro


Elia Palop-Navarro (recipient of three LASmoons)
Research Unit in Biodiversity (UO-PA-CSIC)
University of Oviedo, SPAIN.

Background:
Old-growth forests play an important role in biodiversity conservation. However, a long history of human transformation of the landscape has led to few such forests existing today. Their structure, characterized by multiple tree species and ages, old trees and abundant deadwood, is particularly sensitive to management practices (Paillet et al. 2015) and requires a long time to recover from disturbance (Burrascano et al. 2013). Within protected areas we would expect higher proportions of old-growth forests since these areas are in principle managed to ensure the conservation of natural ecosystems and processes. Nevertheless, most protected areas in the EU sustained use and exploitation in the past, or even still do.

Part of the study area. Dotted area corresponds to forest surface under protection.

Goal:
Through the application of a model developed in the study area, using public LiDAR and forest inventory data (Palop-Navarro et al. 2016), we’d like to know how much of the forest in a network of mountain protected areas retains structural attributes compatible with old-growth forests. The LiDAR processing tasks for which LAStools will be used involve a total of 614,808 plots for which we have to derive height metrics, such as the mean or median canopy height and its variability.

Vegetation profile colored by height in a LiDAR sample of the study area.

Data:
+ Public LiDAR data that can be downloaded here with a mean pulse density of 0.5 points per square meter. This data has up to 5 returns and is already classified into ground, low, mid or high vegetation, building, noise or overlap.
+ The area covers forested areas within protected areas in the Cantabrian Mountains, occupying 1,207 km².

LAStools processing (a command-line sketch for steps 2 and 3 follows the list):
1) quality checking of the data as described in several videos and blog posts [lasinfo, lasvalidate, lasoverlap, lasgrid, las2dem]
2) use the existing ground classification (if its quality suffices) to normalize the elevations to heights above ground using tile-based processing with on-the-fly buffers of 50 meters to avoid edge artifacts [lasheight]
3) compute height-based forestry metrics (e.g. ‘-avg’, ‘-std’, and ‘-p 50’) for each plot in the study area [lascanopy]
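A hedged sketch of steps 2 and 3 follows. The directory names, the number of cores, and the 25 meter step for lascanopy are assumptions; it is also assumed that lasheight accepts the same on-the-fly ‘-buffered’ option used with las2dem elsewhere on this blog, and the exact output naming of lascanopy may differ:

:: step 2: height-normalize tiles using 50 m on-the-fly buffers
lasheight -i tiles\*.laz ^
          -buffered 50 ^
          -replace_z ^
          -odir normalized -olaz -cores 4
:: step 3: rasters of average, standard deviation and median (50th percentile) height
lascanopy -i normalized\*.laz ^
          -step 25 ^
          -avg -std -p 50 ^
          -odir metrics -obil -cores 4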

References:
Burrascano, S., Keeton, W.S., Sabatini, F.M., Blasi, C. 2013. Commonality and variability in the structural attributes of moist temperate old-growth forests: a global review. Forest Ecology and Management 291:458-479.
Paillet, Y., Pernot, C., Boulanger, V., Debaive, N., Fuhr, M., Gilg, O., Gosselin, F. 2015. Quantifying the recovery of old-growth attributes in forest reserves: A first reference for France. Forest Ecology and Management 346:51-64.
Palop-Navarro, E., Bañuelos, M.J., Quevedo, M. 2016. Combinando datos lidar e inventario forestal para identificar estados avanzados de desarrollo en bosques caducifolios. Ecosistemas 25(3):35-42.



LASmoons: Chloe Brown


Chloe Brown (recipient of three LASmoons)
Geosciences, School of Geography
University of Nottingham, UK

Background:
Malaysia’s North Selangor peat swamp forest is experiencing rapid and large-scale conversion of peat swampland to oil palm agriculture, contrary to prevailing environmental guidelines. Given the global importance of tropical peatlands, and the uncertainties surrounding historical and future oil palm development, quantifying the spatial distribution of ecosystem service values, such as climate mitigation, is key to understanding the trade-offs associated with anthropogenic land use change.
The study explores the capabilities and methods of remote sensing and field-based data sets for extracting relevant metrics for the assessment of the carbon stocks held in the North Selangor peat swamp forest reserve, estimating both the current carbon stored in the above- and below-ground biomass as well as the changes in carbon stock over time driven by anthropogenic land use change. Project findings will feed directly into peatland management practices and environmental accounting in Malaysia through the Tropical Catchments Research Initiative (TROCARI), and support the Integrated Management Plan of the Selangor State Forest Department (see here for a sample).


Goal:
LiDAR data is now seen as the practical option when assessing canopy height over large scales (Fassnacht et al., 2014), with Lucas et al. (2008) finding LiDAR data to produce more accurate tree height estimates than those derived from manual field-based methods. At this stage of the project, the goal is to produce a high-quality LiDAR-derived Canopy Height Model (CHM) following the “pit-free” algorithm of Khosravipour et al. (2014) using the LAStools software.

Data:
+ LiDAR provided by the Natural Environment Research Council (NERC) Airborne Research and Survey Facility’s 2014 Malaysia Campaign.
+ covers 685 square kilometers (closed source)
+ collected with Leica ALS50-II LiDAR system
+ average pulse spacing < 1 meter, average pulse density 1.8 per square meter

LAStools processing (a possible command-line sketch follows the list):
1) Create 1000 meter tiles with 35 meter buffer to avoid edge artifacts [lastile]
2) Remove noise points (class 7) that are already classified [las2las]
3) Classify point clouds into ground (class 2) and non-ground (class 1) [lasground]
4) Generate normalized above-ground heights [lasheight]
5) Create DSM and DTM [las2dem]
6) Generate a pit-free Canopy Height Model (CHM) as described here [lasthin, las2dem, lasgrid]
7) Generate a spike-free Canopy Height Model (CHM) as described here for comparison [las2dem]
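A possible command-line sketch for steps 1 to 4 and 7 is given below. The tile size and buffer follow the list above, while the noise points are simply dropped with a filter during tiling instead of a separate las2las pass; the directory names, the 1 meter raster step, and the ‘-spike_free’ freeze value (derived from the “about three times the average pulse spacing” rule of thumb) are assumptions:

:: steps 1 and 2: tile with 35 m buffer and drop the already-classified noise (class 7)
lastile -i nerc\*.laz ^
        -drop_class 7 ^
        -tile_size 1000 -buffer 35 -flag_as_withheld ^
        -odir tiles -o selangor.laz
:: step 3: classify into ground (2) and non-ground (1)
lasground -i tiles\*.laz ^
          -wilderness ^
          -odir ground -olaz -cores 4
:: step 4: compute normalized above-ground heights
lasheight -i ground\*.laz ^
          -replace_z ^
          -odir normalized -olaz -cores 4
:: step 7: spike-free CHM from the height-normalized points
las2dem -i normalized\*.laz ^
        -spike_free 2.0 ^
        -step 1.0 -use_orig_bb ^
        -odir chm -obil -cores 4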

References:
Fassnacht, F.E., Hartig, F., Latifi, H., Berger, C., Hernández, J., Corvalán, and P., Koch, B. (2014). Importance of sample size, data type and prediction method for remote sensing-based estimations of above-ground forest biomass. Remote Sensing. Environment. 154, 102–114.
Khosravipour, A., Skidmore, A. K., Isenburg, M., Wang, T., and Hussin, Y. A. (2014). Generating pit-free canopy height models from airborne LiDAR. Photogrammetric Engineering & Remote Sensing, 80(9), 863-872.
Lucas, R. M., Lee, A. C., and Bunting, P. J., (2008). Retrieving forest biomass through integration of casi and lidar data. International Journal of Remote Sensing, 29 (5), 1553-1577.


Leaked: “Classified LiDAR” of Pentagon in LAS 1.4 Format


LiDAR leaks have happened! Black helicopters are in the sky! A few days ago a tiny tweet leaked the online location of “classified LiDAR” for Washington, DC. This LiDAR really is “classified” and includes an aerial scan of the Pentagon. For rogue scientists world-wide we offer a secret download link. It links to a file code-named ‘pentagon.laz‘ that contains the 8,044,789 “classified” returns of the Pentagon shown below. This “classified file” can be deciphered by any software with native LAZ support. It was encrypted with the “LAS 1.4 compatibility mode” of LASzip. The original LAS 1.4 content was encoded into an inconspicuous-looking LAZ file. New point attributes (such as the scanner channel) were hidden as “extra bytes” for fully lossless encryption. Use ‘laszip‘ to fully decode the original “classified” LAS 1.4 file … (-;
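A single laszip call should be all that is needed for the decoding, assuming the decompressor recognizes the compatibility-mode “extra bytes” automatically:

laszip -i pentagon.laz -o pentagon.las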

Seriously, a tiled LiDAR data set for the District of Columbia flown in 2015 is available for anyone to use on Amazon S3 with a very permissive open data license, namely the Creative Commons Attribution 3.0 License. The LiDAR coverage can be explored via this interactive map. The tiles are provided in LAS 1.4 format and use the new point type 6. We downloaded a few tiles near the White House, the Capitol, and the Pentagon to test the “native LAS 1.4 extension” of our LASzip compressor which will be released soon (a prototype for testing is already available). As these uncompressed LAS files are YUUUGE we use the command line utility ‘wget‘ for downloading. With option ‘-c’ the download continues where it left off in case the transfer gets interrupted.

LiDAR pulse density from 20 or less (blue) to 100 or more (red) pulses per square meter.

We use lasboundary to create labeled bounding boxes for display in Google Earth and lasgrid to create a false color visualization of pulse density with the command lines shown below. Pulse densities of 20 or below are mapped to blue. Pulse densities of 100 or above are mapped to red. We picked the min value 20 and the max value 100 for this false color mapping by running lasinfo with the ‘-cd’ option to compute an average pulse density and then refining the numbers experimentally. We also use lasoverlap to visualize how flightlines overlap and how well they align. Vertical differences of up to 20 cm are mapped to white and differences of 40 cm or more are mapped to saturated blue or red.

lasboundary -i *.las ^
            -use_bb ^
            -labels ^
            -odir quality -odix _bb -okml

lasgrid -i *.las ^
        -keep_last ^
        -point_density -step 2 ^
        -false -set_min_max 20 100 ^
        -odir quality -odix _d_20_100 -opng ^
        -cores 2

lasoverlap -i *.las ^
           -min_diff 0.2 -max_diff 0.4 ^
           -odir quality -opng ^
           -cores 2

The visualization of the pulse density and of the flightline overlap both show that there is no LiDAR for the White House or Capitol Hill. We will never know how tall the tomato and kale plants had grown in Michelle Obama’s organic garden on that day. Note that the White House and Capitol Hill were not simply “cut out”. Instead the flight plan of the survey plane was carefully designed to avoid these areas. Surprisingly, the Pentagon did not receive the same treatment and is (almost) fully included in the open LiDAR as mentioned in the dramatic first paragraph. It is interesting how the varying (tidal?) water level of the Potomac River shows up in the visualization of flightline misalignments.

There are a number of issues in these LiDAR files. The most serious ones are reported at the very end of this article. We will now scrutinize the partly-filled tile 2016.las close to the White House with only 11,060,334 returns. A lasvalidate check immediately reports three deviations from the LAS 1.4 specification:

lasvalidate -i 2016.las -o 2016_check.xml
  1. For proper LAS 1.4 files containing point type 6 through 10 all ‘legacy’ point counts in the LAS header should be set to 0. The following six fields in the LAS header should be zero for tile 2016.las (and all other tiles):
    + legacy number of point records
    + legacy number of points by return[0]
    + legacy number of points by return[1]
    + legacy number of points by return[2]
    + legacy number of points by return[3]
    + legacy number of points by return[4]
  2. There should not be any LiDAR return in a valid LAS file whose ‘number of returns of given pulse’ attribute is zero but there are 8 such points in tile 2016.las (and many more in various other tiles).
  3. There should not be any LiDAR return whose ‘return number’ attribute is larger than their ‘number of returns of given pulse’ attribute but there are 8 such points in tile 2016.las (and many more in various other tiles).

The first issue is trivial. There is an efficient in-place fix that does not require rewriting the entire file, using lasinfo with the following command line:

lasinfo -i 2016.las ^
        -nh -nv -nc ^
        -set_number_of_point_records 0 ^
        -set_number_of_points_by_return 0 0 0 0 0

A quick check with las2txt shows us that the second and third issue are caused by the same eight points. Instead of writing an 8 for the ‘number of returns’ attribute the LAS file exporter must have written a 0 (marked in red for all eight returns) and instead of writing an 8 for the ‘return number’ attribute the LAS file exporter must have written a 1 (also marked in red). We can tell it apart from the true first return via its z coordinate (marked in blue), as the last return should be the lowest of all.

las2txt -i 2016.las ^
        -keep_number_of_returns 0 ^
        -parse xyzrnt ^
        -stdout
397372.70 136671.62 33.02 4 0 112813299.954811
397372.03 136671.64 28.50 5 0 112813299.954811
397371.28 136671.67 23.48 6 0 112813299.954811
397370.30 136671.68 16.86 7 0 112813299.954811
397369.65 136671.70 12.50 1 0 112813299.954811
397374.37 136671.58 44.17 3 0 112813299.954811
397375.46 136671.56 51.49 1 0 112813299.954811
397374.86 136671.57 47.45 2 0 112813299.954811

With las2las we can change the ‘number of returns’ from 0 to 8 using a ‘-filtered_transform’ as illustrated in the command line below. We suspect that higher numbers of returns such as 9 or 10 might have been mapped to 1 and 2. Fixing those as well as repairing the wrong return numbers will require a more complex tool. We would recommend checking all tiles with more scrutiny using the lasreturn tool. But wait … more return numbering issues are to come.

las2las -i 2016.las ^
        -keep_number_of_returns 0 ^
        -filtered_transform ^
        -set_extended_number_of_returns 8 ^
        -odix _fixed -olas

A closer look at the scan pattern reveals that the LiDAR survey was flown with a dual-beam system where two laser beams scan the terrain simultaneously. This is evident in the textual representation below as there are multiple “sets of returns” for the same GPS time stamp such as 112813952.110394. We group the returns from the two beams into an orange and a green group. Their coordinates show that the two laser beams point into different directions when they are simultaneously “shot” and therefore hit the terrain far apart from one another.

las2txt -i 2016.las ^
        -keep_gps_time 112813952.110392 112813952.110396 ^
        -parse xyzlurntp ^
        -stdout
397271.40 136832.35 54.31 0 0 1 1 112813952.110394 117
397277.36 136793.35 38.68 0 1 1 4 112813952.110394 117
397277.35 136793.56 32.89 0 1 2 4 112813952.110394 117
397277.34 136793.88 24.13 0 1 3 4 112813952.110394 117
397277.32 136794.25 13.66 0 1 4 4 112813952.110394 117

The information about which point is from which beam is currently stored into the generic ‘user data’ attribute instead of into the dedicated ‘scanner channel’ attribute. This can be fixed with las2las as follows.

las2las -i 2016.las ^
        -copy_user_data_into_scanner_channel ^
        -set_user_data 0 ^
        -odix _fixed -olas

Unfortunately the LiDAR files have much more serious issues in the return numbering. It’s literally a “Total Disaster!” and “Sad!” as the US president will tweet shortly. After grouping all returns with the same GPS time stamp into an orange and a green group there is one more set of returns left unaccounted for.

las2txt -i 2016.las ^
        -keep_gps_time 112813951.416451 112813951.416455 ^
        -parse xyzlurntpi ^
        -stdout
397286.02 136790.60 45.90 0 0 1 4 112813951.416453 117 24
397286.06 136791.05 39.54 0 0 2 4 112813951.416453 117 35
397286.10 136791.51 33.34 0 0 3 4 112813951.416453 117 24
397286.18 136792.41 21.11 0 0 4 4 112813951.416453 117 0
397286.12 136791.75 30.07 0 0 1 1 112813951.416453 117 47
397291.74 136750.70 45.86 0 1 1 1 112813951.416453 117 105
las2txt -i 2016.las ^
        -keep_gps_time 112813951.408708 112813951.408712 ^
        -parse xyzlurntpi ^
        -stdout
397286.01 136790.06 45.84 0 0 1 4 112813951.408710 117 7
397286.05 136790.51 39.56 0 0 2 4 112813951.408710 117 15
397286.08 136790.96 33.33 0 0 3 4 112813951.408710 117 19
397286.18 136792.16 17.05 0 0 4 4 112813951.408710 117 0
397286.11 136791.20 30.03 0 0 1 2 112813951.408710 117 58
397286.14 136791.67 23.81 0 0 2 2 112813951.408710 117 42
397291.73 136750.16 45.88 0 1 1 1 112813951.408710 117 142

This can be visualized with lasview and the result is unmistakably clear: the return numbering is messed up. There should be one shot with five returns (not a group of four and a single return) in the first example. And there should be one shot with six returns (not a group of four and a group of two returns) in the second example. Such a broken return numbering results in extra first (or last) returns. These are serious issues that affect any algorithm that relies on the return numbering, such as first-return DSM generation or canopy cover computation. Those extra returns will also make the pulse density appear higher and the pulse spacing appear tighter than they really are. The numbers from 20 (blue) to 100 (red) pulses per square meter in our earlier visualization are definitely inflated.

lasview -i 2016.las ^
        -keep_gps_time 112813951.416451 112813951.416455 ^
        -color_by_return

lasview -i 2016.las ^
        -keep_gps_time 112813951.408708 112813951.408712 ^
        -color_by_return

After all these troubles, here is something nice: a side-by-side comparison of a first-return TIN and a spike-free TIN (using a freeze of 0.8 m) of the center court cafe of the Pentagon. Especially given all these “fake first returns” in the Washington DC LiDAR, we really need the spike-free algorithm to finally “Make a DSM great again!” … (-;
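One way to produce a similar comparison as hillshaded rasters is sketched below; the 0.25 meter step is an assumption, while the 0.8 meter freeze is the value mentioned above:

:: hillshaded first-return DSM of the Pentagon tile
las2dem -i pentagon.laz ^
        -keep_first ^
        -step 0.25 ^
        -hillshade ^
        -o pentagon_fr.png
:: hillshaded spike-free DSM using a freeze constraint of 0.8 m
las2dem -i pentagon.laz ^
        -spike_free 0.8 ^
        -step 0.25 ^
        -hillshade ^
        -o pentagon_sf.png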

We would like to acknowledge the District of Columbia Office of the Chief Technology Officer (OCTO) for providing this data with a very permissive open data license, namely the Creative Commons Attribution 3.0 License.

 


Plots to Stands: Producing LiDAR Vegetation Metrics for Imputation Calculations


Some professionals in remote sensing find LAStools a useful tool to extract statistical metrics from LiDAR that are used to make estimations about a larger area of land from a small set of sample plots. Common applications are prediction of the timber volume or the above-ground biomass for entire forests based on a number of representative plots where exact measurements were obtained with field work. The same technique can also be used to make estimations about animal habitat or coconut yield or to classify the type of vegetation that covers the land. In this tutorial we describe the typical workflow for computing common metrics for smaller plots and larger areas using LAStools.

Download these six LiDAR tiles (1, 2, 3, 4, 5, 6) from a Eucalyptus plantation in Brazil to follow along the step by step instructions of this tutorial. This data is courtesy of Suzano Pulp and Paper. Please also download the two shapefiles that delineate the plots where field measurements were taken and the stands for which predictions are to be made. You should download version 170327 (or higher) of LAStools due to some recent bug fixes.

Quality Checking

Before processing newly received LiDAR data we always perform a quality check first. This ranges from visual inspection with lasview, to printing textual content reports and attribute histograms with lasinfo, to flight-line alignment checks with lasoverlap, pulse density and pulse spacing checks with lasgrid and las2dem, and completeness-of-returns check with lassort followed by lasreturn.

lasinfo -i tiles_raw\CODL0003-C0006.laz ^
        -odir quality -odix _info -otxt

The lasinfo report tells us that there is no projection information. However, we remember that this Brazilian data was in the common SIRGAS 2000 projection and try for a few likely UTM zones whether the hillshaded DSM produced by las2dem falls onto the right spot in Google Earth.

las2dem -i tiles_raw\CODL0003-C0006.laz ^
        -keep_first -thin_with_grid 1 ^
        -hillshade -epsg 31983 ^
        -o epsg_check.png

Hillshaded DSM and Google Earth imagery align for EPSG code 31983

The lasinfo report also tells us that the xyz coordinates are stored with millimeter resolution which is a bit of an overkill. For higher and faster LASzip compression we will later lower this to a more appropriate centimeter resolution. It further tells us that the returns are stored using point type 0 and that is a bit unfortunate. This (older) point type does not have a GPS time stamp so that some quality checks (e.g. “completeness of returns” with lasreturn) and operations (e.g. “resorting of returns into acquisition order” with lassort) will not be possible. Fortunately the min-max range of the ‘point source ID’ suggests that this point attribute is correctly populated with flightline numbers so that we can do a check for overlap and alignment of the different flightlines that contribute to the LiDAR in each tile.

lasoverlap -i tiles_raw\*.laz ^
           -min_diff 0.2 -max_diff 0.4 ^
           -epsg 31983 ^
           -odir quality -opng ^
           -cores 3

We run lasoverlap to visualize the amount of overlap between flightlines and the vertical differences between them. The produced images (see below) color code the number of flightlines and the maximum vertical difference between any two flightlines. Most of the area is cyan (2 flightlines) except in the bottom left where the pilot was sloppy and left some gaps in the yellow seams (3 flightlines) so that some spots are only blue (1 flightline). We also see that two tiles in the upper left are partly covered by a diagonal flightline. We will drop that flightline later to create a more uniform density across the tiles. The blue areas in the difference image are mostly aligned with features in the landscape and less with the flightline pattern. Closer inspection shows that these vertical differences occur mainly in the dense old-growth forests with species of different heights that are much harder for the laser to penetrate than the uniform and short-lived Eucalyptus plantation, which is more of a “dead forest” with little undergrowth or animal habitat.

Interesting observation: the vertical difference of the lowest returns from different flightlines, computed per 2 meter by 2 meter grid cell, could maybe be used as a new forestry metric to help distinguish mono cultures from natural forests.

lasgrid -i tiles_raw\*.laz ^
        -keep_last ^
        -step 2 -point_density ^
        -false -set_min_max 10 20 ^
        -epsg 31983 ^
        -odir quality -odix _d_2m_10_20 -opng ^
        -cores 3

lasgrid -i tiles_raw\*.laz ^
        -keep_last ^
        -step 5 -point_density ^
        -false -set_min_max 10 20 ^
        -epsg 31983 ^
        -odir quality -odix _d_5m_10_20 -opng ^
        -cores 3

We run lasgrid to visualize the pulse density per 2 by 2 meter cell and per 5 by 5 meter cell. The produced images (see below) color code the number of last returns per square meter. The impact of the tall Eucalyptus trees on the density-per-cell computation is evident for the smaller 2 meter cell size in the form of a noisy blue/red diagonal in the top right as well as a noisy blue/red area in the bottom left. Both of those turn to a more consistent yellow for the density-per-cell computation with the larger 5 meter cells. Immediately evident is the higher density (red) for those parts of the two tiles in the upper left that are covered by the additional diagonal flightline. The blueish area to the left of the center of the image suggests a consistently lower pulse density whose cause remains to be investigated: Lower reflectivity? Missing last returns? Cloud cover?

The lasinfo report suggests that the tiles are already classified. We could either use the ground classification provided by the vendor or re-classify the data ourselves (using lastile, lasnoise, and lasground). We check the quality of the ground classification by visually inspecting a hillshaded DTM created with las2dem from the ground returns. We buffer the tiles on-the-fly for a seamless hillshade without artifacts along tile boundaries by adding ‘-buffered 25’ and ‘-use_orig_bb’ to the command line. To speed up reading the 25 meter buffers from neighboring tiles we first create a spatial index with lasindex.

lasindex -i tiles_raw\*.laz ^
         -cores 3

las2dem -i tiles_raw\*.laz ^
        -buffered 25 ^
        -keep_class 2 -thin_with_grid 0.5 ^
        -use_orig_bb ^
        -hillshade -epsg 31983 ^
        -odir quality -odix _dtm -opng ^
        -cores 3

hillshaded DTM tiles generated with las2dem and on-the-fly buffering

The resulting hillshaded DTM shows a few minor issues in the ground classification but also a big bump (above the mouse cursor). Closer inspection of this area (you can cut it from the larger tile using the command lines below) shows that there is a group of misclassified points about 1200 meters below the terrain. Hence, we will start from scratch to prepare the data for the extraction of forestry metrics.

las2las -i tiles_raw\CODL0004-C0006.laz ^
        -inside_tile 207900 7358350 100 ^
        -o bump.laz

lasview -i bump.laz

bump in hillshaded DTM caused by misclassified low noise

Data Preparation

Using lastile we first tile the data into smaller 500 meter by 500 meter tiles with a 25 meter buffer while flagging all points in the buffer as ‘withheld’. In the same step we lower the resolution to centimeters and put a nicer coordinate offset into the LAS header. We also remove the existing classification and classify all points that are far below the target terrain as class 7 (aka noise). We also add CRS information and give each tile the base name ‘suzana.laz’.

lastile -i tiles_raw\*.laz ^
        -rescale 0.01 0.01 0.01 ^
        -auto_reoffset ^
        -set_classification 0 ^
        -classify_z_below_as 500.0 7 ^
        -tile_size 500 ^
        -buffer 25 -flag_as_withheld ^
        -epsg 31983 ^
        -odir tiles_buffered -o suzana.laz

With lasnoise we mark the many isolated points that are high above or far below the terrain as class 7 (aka noise). Using two tiles we played around with the ‘step’ parameters until we found good settings. See the README of lasnoise for the exact meaning and the choice of parameters for noise classification. We look at one of the resulting tiles with lasview.

lasnoise -i tiles_buffered\*.laz ^
         -step_xy 4 -step_z 2 ^
         -odir tiles_denoised -olaz ^
         -cores 3

lasview -i tiles_denoised\suzana_206000_7357000.laz ^
        -color_by_classification ^
        -win 1024 192

noise points shown in pink: all points (top), only noise points (bottom)

Next we use lasground to classify the last returns into ground (2) and non-ground (1). It is important to ignore the noise points with classification 7 to avoid the kind of bump we saw in the vendor-delivered classification. We again check the quality of the computed ground classification by producing a hillshaded DTM with las2dem. Here the las2dem command line is slightly different: instead of buffering on-the-fly we use the buffers stored with each tile.

lasground -i tiles_denoised\*.laz ^
          -ignore_class 7 ^
          -nature -extra_fine ^
          -odir tiles_ground -olaz ^
          -cores 3

las2dem -i tiles_ground\*.laz ^
        -keep_class 2 -thin_with_grid 0.5 ^
        -hillshade ^
        -use_tile_bb ^
        -odir quality -odix _dtm_new -opng ^
        -cores 3

Finally, with lasheight we compute how high each return is above the triangulated surface of all ground returns and store this height value in place of the elevation in the z coordinate using the ‘-replace_z’ switch. This height-normalizes the LiDAR in the sense that all ground returns are set to an elevation of 0 while all other returns get an elevation relative to the ground. The result is a set of height-normalized LiDAR tiles that are ready for producing the desired forestry metrics.

lasheight -i tiles_ground\*.laz ^
          -replace_z ^
          -odir tiles_normalized -olaz ^
          -cores 3

Metric Production

The tool for computing the metrics for the entire area as well as for the individual field plots is lascanopy. Which metrics are best suited for your particular imputation calculation is for you to determine. Maybe first compute a large number of them and then eliminate the redundant ones. Do not use any points from the tile buffers for these calculations. We had flagged them as ‘withheld’ during the lastile operation, so they are easy to drop. We also want to drop the noise points that were classified as 7. And we were planning to drop the additional diagonal flightline we noticed during quality checking. We generated two lasinfo reports with the ‘-histo point_source 1’ option for the two tiles it was covering. From the two histograms it was easy to see that the diagonal flightline has the number 31.
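Such per-tile lasinfo reports could be generated along these lines; the output directory and file name suffix are our own choices, and one would then inspect the reports of the two tiles covered by the diagonal flightline.

lasinfo -i tiles_raw\*.laz ^
        -histo point_source 1 ^
        -odir quality -odix _ps -otxt ^
        -cores 3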

First we run lascanopy on the 11 plots that you can download here. When running on plots it makes sense to first create a spatial index with lasindex for faster querying so that only those tiny parts of the LAZ files that actually cover the plots need to be loaded.

lasindex -i tiles_normalized\*.laz ^
         -cores 3

lascanopy -i tiles_normalized\*.laz -merged ^
          -drop_withheld ^
          -drop_class 7 ^
          -drop_point_source 31 ^
          -lop WKS_PLOTS.shp ^
          -cover_cutoff 3.0 ^
          -cov -dns ^
          -height_cutoff 2.0 ^
          -c 2.0 999.0 ^
          -max -avg -std -kur ^
          -p 25 50 75 95 ^
          -b 30 50 80 ^
          -d 2.0 5.0 10.0 50.0 ^
          -o plots.csv

The resulting ‘plots.csv’ file can easily be processed in other software packages. It contains one line for each polygonal plot listed in the shapefile, giving its bounding box followed by all the requested metrics. But why is there a zero maximum height (marked in orange) for plots 6 through 10? All height metrics are computed solely from returns that are higher than the ‘height_cutoff’ that was set to 2 meters. We added the ‘-c 2.0 999.0’ absolute count metric to keep track of the number of returns used in these calculations. Apparently in plots 6 through 10 there was not a single return above 2 meters as the count (also marked in orange) is zero for all these plots. It turns out this Eucalyptus stand had recently been harvested and the new seedlings are still shorter than 2 meters.

more plots.csv
index,min_x,min_y,max_x,max_y,max,avg,std,kur,p25,p50,p75,p95,b30,b50,b80,c00,d00,d01,d02,cov,dns
0,206260.500,7358289.909,206283.068,7358312.477,11.23,6.22,1.91,2.26,4.71,6.01,7.67,9.5,26.3,59.7,94.2,5359,18.9,41.3,1.5,76.3,60.0
1,206422.500,7357972.909,206445.068,7357995.477,13.54,7.5,2.54,1.97,5.32,7.34,9.65,11.62,26.9,54.6,92.2,7030,12.3,36.6,13.3,77.0,61.0
2,206579.501,7358125.909,206602.068,7358148.477,12.22,5.72,2.15,2.5,4.11,5.32,7.26,9.76,46.0,73.7,97.4,4901,24.8,28.7,2.0,66.8,51.2
3,206578.500,7358452.910,206601.068,7358475.477,12.21,5.68,2.23,2.64,4.01,5.14,7.18,10.04,48.3,74.1,95.5,4861,25.7,26.2,2.9,68.0,50.2
4,206734.501,7358604.910,206757.068,7358627.478,15.98,10.3,2.18,2.64,8.85,10.46,11.9,13.65,3.3,27.0,91.0,4946,0.6,32.5,44.5,91.0,77.5
5,207043.501,7358761.910,207066.068,7358784.478,15.76,10.78,2.32,3.43,9.27,11.03,12.49,14.11,3.2,20.7,83.3,4819,1.5,24.7,51.0,91.1,76.8
6,207677.192,7359630.526,207699.760,7359653.094,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.0,0.0,0.0,0,0.0,0.0,0.0,0.0,0.0
7,207519.291,7359372.366,207541.859,7359394.934,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.0,0.0,0.0,0,0.0,0.0,0.0,0.0,0.0
8,207786.742,7359255.850,207809.309,7359278.417,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.0,0.0,0.0,0,0.0,0.0,0.0,0.0,0.0
9,208159.017,7358997.344,208181.584,7359019.911,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.0,0.0,0.0,0,0.0,0.0,0.0,0.0,0.0
10,208370.909,7358602.565,208393.477,7358625.133,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.0,0.0,0.0,0,0.0,0.0,0.0,0.0,0.0

Then we run lascanopy on the entire area and produce one raster per tile for each metric. Here we remove the buffered points with the ‘-use_tile_bb’ switch that also ensures that the produced rasters have exactly the extent of the tiles without buffers. Again, it is imperative that you use version 170327 (or higher) of LAStools for this to work correctly.

lascanopy -version
LAStools (by martin@rapidlasso.com) version 170327 (academic)

lascanopy -i tiles_normalized\*.laz ^
          -use_tile_bb ^
          -drop_class 7 ^
          -drop_point_source 31 ^
          -step 10 ^
          -cover_cutoff 3.0 ^
          -cov -dns ^
          -height_cutoff 2.0 ^
          -c 2.0 999.0 ^
          -max -avg -std -kur ^
          -p 25 50 75 95 ^
          -b 30 50 80 ^
          -d 2.0 5.0 10.0 50.0 ^
          -odir tile_metrics -oasc ^
          -cores 3

The resulting rasters in ASC format can easily be previewed with lasview for some “sanity checking” that our metrics make sense and to get a quick overview of what these metrics look like.

lasview -i tile_metrics\suzana_*max.asc
lasview -i tile_metrics\suzana_*p95.asc
lasview -i tile_metrics\suzana_*p50.asc
lasview -i tile_metrics\suzana_*p25.asc
lasview -i tile_metrics\suzana_*cov.asc
lasview -i tile_metrics\suzana_*d00.asc
lasview -i tile_metrics\suzana_*d01.asc
lasview -i tile_metrics\suzana_*b30.asc
lasview -i tile_metrics\suzana_*b80.asc

The maximum height rasters are useful to inspect more closely as they will immediately tell us if any high noise point slipped through the cracks. And indeed it happened, as we see a maximum of 388.55 meters for one of the 10 by 10 meter cells. As we know the expected height of the trees we could have added a ‘-drop_z_above 70’ to the lascanopy command line. Be careful, however, when computing forestry metrics in strongly sloped terrain, as the terrain slope can lift returns to heights much higher than that of the tree. This is guaranteed to happen for LiDAR returns from branches that extend horizontally far over the down-sloped part of the terrain as shown in this paper here.
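For completeness, this is how the area-wide lascanopy run from above would look with that additional filter; the 70 meter cutoff is the value suggested in the paragraph above and should be adapted to the expected maximum tree height of your data.

lascanopy -i tiles_normalized\*.laz ^
          -use_tile_bb ^
          -drop_class 7 ^
          -drop_point_source 31 ^
          -drop_z_above 70 ^
          -step 10 ^
          -cover_cutoff 3.0 ^
          -cov -dns ^
          -height_cutoff 2.0 ^
          -c 2.0 999.0 ^
          -max -avg -std -kur ^
          -p 25 50 75 95 ^
          -b 30 50 80 ^
          -d 2.0 5.0 10.0 50.0 ^
          -odir tile_metrics -oasc ^
          -cores 3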

We did not use the shapefile for the stands in this exercise. We could have clipped the normalized LiDAR points to these stands using lasclip as shown in the GUI below before generating the raster metrics. This would have saved space and computation time as many of the LiDAR points lie outside of the stands. However, it might be better to do that clipping step on the rasters in whichever GIS software or statistics package you are using for the imputation computation in order to properly account for partly covered raster cells along the stand boundaries. This could be the subject of another blog article … (-:

not all LiDAR was needed to compute metrics for
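A minimal sketch of that optional clipping step, assuming the stand polygons are stored in a shapefile called ‘stands.shp’ (the shapefile name and the output directory are our own choices):

lasclip -i tiles_normalized\*.laz ^
        -poly stands.shp ^
        -odir tiles_stands -olaz ^
        -cores 3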


LASmoons: Muriel Lavy

Muriel Lavy (recipient of three LASmoons)
RED (Risk Evaluation Dashboard) project
ISE-Net s.r.l, Aosta, ITALY.

Background:
The Aosta Valley Region is a mountainous area in the heart of the Alps. This region is regularly affected by hazardous natural phenomena connected with the terrain geomorphometry and climate change: snow avalanches, rockfalls, and landslides.
In July 2016 a research program funded by the European Program for the Regional Development was started that aims to create a cloud dashboard for the monitoring, control, and analysis of several parameters and data derived from advanced sensors: multiparametrical probes, aerial and oblique photogrammetry, and laser scanning. This tool will help the territory management agencies to improve the risk mitigation and management system.

The RIEGL VZ-4000 scanning the Aosta Valley Region in Italy.

Goal:
This study aims to classify the point clouds derived from aerial imagery integrated with laser scanning data in order to generate accurate DTMs, DSMs, and Digital Snow Models. The photogrammetry data set was acquired with a Nikon D810 camera from a helicopter survey. The aim of further analysis is to detect changes that have occurred due to natural dynamic phenomena via volume analysis and mass balance evaluation.

Data:
+ The photogrammetry data set was acquired with an RGB camera (Nikon D810) with a focal length equivalent of 50 mm from a helicopter survey: 1060 JPG images
+ The laser scanner data set was acquired using a Terrestrial Laser Scanner (RIEGL VZ-4000) combined with a Leica GNSS device (GS25) to georeference the project. The TLS dataset was then used as base reference to properly align and georeference the photogrammetry point cloud.

LAStools processing:
1) check the reference system and the point cloud density [lasinfo, lasvalidate]
2) remove isolated noise points [lasnoise]
3) classify points into ground and non-ground [lasground]
4) classify point clouds into vegetation and other [lasclassify]
5) create DTM and DSM  [las2dem, lasgrid, blast2dem]
6) produce 3D visualizations to facilitate the communication and the interaction [lasview]


LASmoons: Gudrun Norstedt

Gudrun Norstedt (recipient of three LASmoons)
Forest History, Department of Forest Ecology and Management
Swedish University of Agricultural Sciences, Umeå, Sweden

Background:
Until the end of the 17th century, the vast boreal forests of the interior of northern Sweden were exclusively populated by the indigenous Sami. When settlers of Swedish and Finnish ethnicity started to move into the area, colonization was fast. Although there is still a prospering reindeer herding Sami culture in northern Sweden, the old Sami culture that dominated the boreal forest for centuries or even millennia is to a large extent forgotten.
Since each forest Sami family formerly had a number of seasonal settlements, the density of settlements must have been high. However, only very few remains are known today. In the field, old Sami settlements can be recognized through the presence of for example stone hearths, storage caches, pits for roasting pine bark, foundations of certain types of huts, reindeer pens, and fences. Researchers of the Forest History section of the Department of Forest Ecology and Management have long been surveying such remains on foot. This, however, is extremely time consuming and can only be done in limited areas. Also, the use of aerial photographs is usually difficult due to dense vegetation. Data from airborne laser scanning should be the best way to find remains of the old forest Sami culture. Previous research has shown the possibilities of using airborne laser scanning data for detecting cultural remains in the boreal forest (Jansson et al., 2009; Koivisto & Laulamaa, 2012; Risbøl et al., 2013), but no studies have aimed at detecting remains of the forest Sami culture. I want to test the possibilities of ALS in this respect.

DTM from the Krycklan catchment, showing a row of hunting pits and (larger) a tar pit.

Goal:
The goal of my study is to test the potential of using LiDAR data for detecting cultural and archaeological remains on the ground in a forest area where Sami have been known to dwell during historical times. Since the whole of Sweden is currently being scanned by the National Land Survey, this data will be included. However, the average point density of the national data is only 0.5–1 pulses/m^2. Therefore, the study will be done in an established research area, the Krycklan catchment, where a denser scanning was performed in 2015. The Krycklan data set lacks ground point classification, so I will have to perform such a classification before I can proceed to the creation of a DTM. Having tested various kinds of software, I have found that LAStools seems to be the most efficient way to do the job. This, in turn, has made me aware of the importance of choosing the right methods and parameters for doing a classification that is suitable for archaeological purposes.

Data:
+ The data was acquired with a multi-spectral airborne LiDAR sensor, the Optech Titan, and a Micro IRS IMU, operated on an aircraft flying at a height of about 1000 m; positioning was post-processed with the TerraPos software for higher accuracy.
+ The average pulse density is 20 pulses/m^2.
+ About 7 000 hectares were covered by the scanning. The data is stored in 489 tiles.

LAStools processing:
1) run a series of classifications of a few selected tiles with both lasground and lasground_new using various parameters [lasground, lasground_new]
2) test the outcomes by comparing them to known terrain to find the optimal parameters for classifying this particular LiDAR point cloud for archaeological purposes
3) extract the bare-earth of all tiles (using buffers!!!) with the best parameters [lasground or lasground_new]
4) create bare-earth terrain rasters (DTMs) and analyze the area [las2dem]
5) reclassify the airborne LiDAR data collected by the National Land Survey using various parameters to see whether it can become more suitable for revealing Sami cultural remains in a boreal forest landscape [lasground or lasground_new]

References:
Jansson, J., Alexander, B. & Söderman, U. 2009. Laserskanning från flyg och fornlämningar i skog. Länsstyrelsen Dalarna (PDF).
Koivisto, S. & Laulamaa, V. 2012. Pistepilvessä – Metsien arkeologiset kohteet LiDAR-ilmalaserkeilausaineistoissa. Arkeologipäivät 2012 (PDF).
Risbøl, O., Bollandsås, O.M., Nesbakken, A., Ørka, H.O., Næsset, E., Gobakken, T. 2013. Interpreting cultural remains in airborne laser scanning generated digital terrain models: effects of size and shape on detection success rates. Journal of Archaeological Science 40:4688–4700.


LASmoons: Marzena Wicht

Marzena Wicht (recipient of three LASmoons)
Department of Photogrammetry, Remote Sensing and GIS
Warsaw University of Technology, Poland.

Background:
More than half of the human population (Heilig 2012) suffers from the many negative effects of living in cities: increased air pollution, limited access to green areas, the Urban Heat Island (UHI) effect, and many more. To mitigate some of these effects, many ideas came up over the years: reducing the surface albedo, the idea of the Garden City, green belts, and so on. Increasing horizontal wind speed might actually improve both the air pollution dispersion and the thermal comfort in urban areas (Gál & Unger 2009). Areas of low roughness promote air flow – discharging the city from warm, polluted air and supplying it with cool and fresh air – if they share specific parameters, are connected, and penetrate the inner city with a country breeze. That is why mapping low roughness urban areas is important for a better understanding of the urban climate.

Goal:
The goal of this study is to derive buildings (outlines and heights) and high vegetation using LAStools and to use that data in mapping urban ventilation corridors for our case study area in Warsaw. There are many ways to map these; however, using ALS data has certain advantages (Suder & Szymanowski 2014) in this case: DSMs can be easily derived, the tree canopy (incl. height) can be included in the analysis, and buildings can be easily extracted. The outputs are then used as a basis for morphological analysis, like calculating the frontal area index. LAStools has the considerable advantage of processing large quantities of data (~500 GB) efficiently.

Frontal area index calculation based on 3D building database

Data:
+ LiDAR provided by Central Documentation Center of Geodesy and Cartography
+ average pulse density 12 p/m^2
+ covers 517 km^2 (whole Warsaw)

LAStools processing:
1) quality checking of the data as described in several videos and blog posts [lasinfo, lasvalidate, lasoverlap, lasgrid, lasduplicate, lasreturn, las2dem]
2) reorganize data into sufficiently small tiles with buffers to avoid edge artifacts [lastile]
3) classify point clouds into vegetation and buildings [lasground, lasclassify]
4) normalize LiDAR heights [lasheight]
5) create triangulated, rasterized derivatives: DSM / DTM / nDSM / CHM [las2dem, blast2dem]
6) compute height-based metrics (e.g. ‘-avg’, ‘-std’, and ‘-p 50’) [lascanopy]
7) generate subsets during the workflow [lasclip]
8) generate building footprints [lasboundary]

References:
Heilig, G. K. (2012). World urbanization prospects: the 2011 revision. United Nations, Department of Economic and Social Affairs (DESA), Population Division, Population Estimates and Projections Section, New York.
Gal, T., & Unger, J. (2009). Detection of ventilation paths using high-resolution roughness parameter mapping in a large urban area. Building and Environment, 44(1), 198-206.
Suder, A., & Szymanowski, M. (2014). Determination of ventilation channels in urban area: A case study of Wroclaw (Poland). Pure and Applied Geophysics, 171(6), 965-975.


Integrating External Ground Points in Forests to Improve DTM from Dense-Matching Photogrammetry

The biggest problem of generating a Digital Terrain Model (DTM) from the photogrammetric point clouds that are produced from aerial imagery with dense-matching software such as SURE, Pix4D, or Photoscan is dense vegetation: when plants completely cover the terrain not a single point is generated on the ground. This is different for LiDAR point clouds as the laser can even penetrate dense multi-level tropical forests. The complete lack of ground points in larger vegetated areas such as closed forests or dense plantations means that the many processing workflows for vegetation analysis that have been developed for LiDAR cannot be used for photogrammetric point clouds  … unless … well unless we are getting those missing ground points some other way. In the following we see how to integrate external ground points to generate a reasonable DTM under a dense forest with LAStools. See this, this, this, this, and this article for further reading.

Here you can download the dense matching point cloud, the manually collected ground points, and the forest stand delineating polygon that we are using in the following example work flow:

We leave the usual inspection of the content with lasinfo and lasview that we always recommend on newly obtained data as an exercise to the reader. Using las2dem and lasgrid we created the Google Earth overlays shown above to visualize the extent of the dense matched point cloud and the distribution of the manually collected ground points:

las2dem -i DenseMatching.laz ^
        -thin_with_grid 1.0 ^
        -extra_pass ^
        -step 2.0 ^
        -hillshade ^
        -odix _hill_2m -opng

lasgrid -i ManualGround.laz ^
        -set_RGB 255 0 0 ^
        -step 10 -rgb ^
        -odix _grid_10m -opng

Attempts to ground-classify the dense matching point cloud directly are futile as there are no ground points under the canopy in the heavily forested area. Therefore 558 ground points that are around 50 to 120 meters apart from one another were manually surveyed in the forest of interest. We show how to integrate these points into the dense matching point cloud such that we can successfully extract bare-earth information from the data.

In the first step we “densify” the manually collected ground points by interpolating them with triangles onto a raster of 2 meter resolution that we store as LAZ points with las2dem. You could consider other interpolation schemes to “densify” the ground points, here we use simple linear interpolation to prove the concept. Due to the varying distance between the manually surveyed ground points we allow interpolating triangles with edge lengths of up to 125 meters. These triangles then also cover narrow open areas next to the forest, so we clip the interpolated ground points against the forest stand delineating polygon with lasclip to classify those points that are really in the forest as “key points” (class 8) and all others as “noise” (class 7).

las2dem -i ManualGround.laz ^
        -step 2 ^
        -kill 125 ^
        -odix _2m -olaz

lasclip -i ManualGround_2m.laz ^
        -set_classification 7 ^
        -poly forest.shp ^
        -classify_as 8 -interior ^
        -odix _forest -olaz

Below we show the resulting densified ground points colored by elevation that survive the clipping against the forest stand delineating polygon and were classified as “key points” (class 8). The interpolated ground points in narrow open areas next to the forest that fall outside this polygon were classified as “noise” (class 7) and are shown in violet. They will be dropped in the next step.

We then merge the dense matching points with the densified manual ground points (while dropping all the violet points marked as noise) as input to lasthin and reclassify the lowest point per 1 meter by 1 meter cell with a temporary code (here we use class 9, which usually refers to “water”). Only the subset of lowest points that receives the temporary classification code 9 will be used for ground classification later.

lasthin -i DenseMatching.laz ^
        -i ManualGround_2m_forest.laz ^
        -drop_class 7 ^
        -merged ^
        -lowest -step 1 -classify_as 9 ^
        -o DenseMatchingAndDensifiedGround.laz

We use the GUI of lasview to pick several interesting areas for visual inspection. The selected points load much faster when the LAZ file is spatially indexed and therefore we first run lasindex. For better orientation we also load the forest stand delineating polygon as an overlay into the GUI.

lasindex -i DenseMatchingAndDensifiedGround.laz

lasview -i DenseMatchingAndDensifiedGround.laz -gui

We pick the area shown below that contains the target forest with manually collected and densified ground points and a forested area with only dense matching points. The difference could not be more drastic as the visualizations show.

Now we run ground classification using lasground with option ‘-town’ using only the points with the temporary code 9 by ignoring all other classifications 0 and 8 in the file. We leave the temporary classification code 9 unchanged for all the points that were not classified with “ground” code 2 so we can visualize later which ones those are.

lasground -i DenseMatchingAndDensifiedGround.laz ^
          -ignore_class 0 8 ^
          -town ^
          -non_ground_unchanged ^
          -o GroundClassified.laz

We again use the GUI of lasview to pick several interesting areas after running lasindex and again load the forest stand delineating polygon as an overlay into the GUI.

lasindex -i GroundClassified.laz

lasview -i GroundClassified.laz -gui

We pick the area shown below that contains all three scenarios: the target forest with manually collected and densified ground points, an open area with only dense matching points, and a forested area with only dense matching points. The result is as expected: in the target forest the manually collected ground points are used as ground and in the open area the dense-matching points are used as ground. But there is no useful ground in the other forested area.

Now we can compute the heights of the points above ground for our target forest with lasheight and either replace the z elevations in the file or store them separately as “extra bytes”. Then we can compute, for example, a Canopy Height Model (CHM) that color codes the height of the vegetation above the ground with lasgrid. Of course this will only be correct in the target forest where we have “good” ground but not in the other forested areas. We also compute a hillshaded DTM to be able to visually inspect the topography of the generated terrain model.

lasheight -i GroundClassified.laz ^
          -store_as_extra_bytes ^
          -o GroundClassifiedWithHeights.laz

lasgrid -i GroundClassifiedWithHeights.laz ^
        -step 2 ^
        -highest -attribute 0 ^
        -false -set_min_max 0 25 ^
        -o chm.png

las2dem -i GroundClassified.laz ^
        -keep_class 2 -extra_pass ^
        -step 2 ^
        -hillshade ^
        -o dtm.png

Here you can download the resulting color-coded CHM and the resulting hillshaded DTM as Google Earth KMZ overlays. Clearly the resulting CHM is only meaningful in the target forest where we used the manually collected ground points to create a reasonable DTM. In the other forested areas the ground is only correct near the forest edges and gets worse with increasing distance from open areas. The resulting DTM exhibits some interesting looking bumps in the middle of areas with manually collected ground points. Those are a result of using the dense-matching points as ground whenever their elevation is lower than that of the manually collected points (which is decided in the lasthin step). Whether those bumps represent true elevations or are artifacts of erroneously low elevations from dense matching remains to be investigated.

For forests on complex and steep terrain the number of ground points that needs to be manually collected may make such an approach infeasible in practice. However, maybe you have another source of elevation, such as a low-resolution DTM of 10 or 25 meter provided by your local government. Or maybe even a high resolution DTM of 1 or 2 meter from a LiDAR survey you did several years ago. While the forest may have grown a lot in the past years, the ground under the forest will probably not have changed much …
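As a rough sketch of that idea, assuming your LAStools version can read rasters in formats such as ASC or BIL directly as points: a coarse DTM could be converted into a sparse set of synthetic ground points and then take the place of the ‘ManualGround_2m_forest.laz’ file in the lasthin step above. The file names below are hypothetical and the approach assumes the old DTM still matches the terrain under the forest.

las2las -i coarse_dtm.asc ^
        -set_classification 8 ^
        -o coarse_ground.laz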



LASmoons: Huaibo Mu

Huaibo Mu (recipient of three LASmoons)
Environmental Mapping, Department of Geography
University College London (UCL), UK

Background:
This study is a part of the EU-funded Metrology for Earth Observation and Climate project (MetEOC-2). It aims to combine terrestrial and airborne LiDAR data to estimate biomass and allometry for woodland trees in the UK. Airborne LiDAR can capture large amounts of data over large areas, while terrestrial LiDAR can provide much more detail of high quality in specific areas. The biomass and allometry for individual tree species in 1 ha of Wytham Woods, located about 5 km north west of the University of Oxford, UK, are estimated by combining both airborne and terrestrial LiDAR. Then the bias will be evaluated when estimations are applied at different levels: terrestrial LiDAR level, tree level, and area level. The goal is better insights and a controllable error range in the bias of biomass and allometry estimates for woodland trees based on airborne LiDAR.

Goal:
The study aims to find suitable parameters of allometric equations for different species and to establish the relationship between tree height, stem diameter, and crown diameter in order to estimate the above-ground biomass using airborne LiDAR. The biomass estimates at the different levels are then compared to evaluate the bias, and the total 6 ha of Wytham Woods are used for calibration and validation. Finally the results are to be applied to other woodlands in the UK. The LiDAR processing tasks for which LAStools are used mainly center around the creation of a suitable Canopy Height Model (CHM) from the airborne LiDAR; a minimal command sketch is given after the processing list below.

Data:
+ Airborne LiDAR data produced by Professor David Coomes (University of Cambridge) with Airborne Research and Survey Facility (ARSF) Project code of RG13_08 in June 2014. The average point density is about 5.886 per m^2.
+ Terrestrial LiDAR data collected by UCL’s team led by Dr. Mat Disney and Dr. Kim Calders in order to develop very detailed 3D models of the trees.
+ Fieldwork from the project “Initial Results from Establishment of a Long-term Broadleaf Monitoring Plot at Wytham Woods, Oxford, UK” by Butt et al. (2009).

LAStools processing:
1) check LiDAR quality as described in these videos and articles [lasinfo, lasvalidate, lasoverlap, lasgrid, las2dem]
2) classify into ground and non-ground points using tile-based processing  [lastile, lasground]
3) generate a Digital Terrain Model (DTM) [las2dem]
4) compute height of points and delete points higher than maximum tree height obtained from terrestrial LiDAR [lasheight]
5) convert points into disks with 10 cm diameter to conservatively account for laser beam width [lasthin]
6) generate spike-free Digital Surface Model (DSM) based on algorithm by Khosravipour et al. (2016) [las2dem]
7) create Canopy Height Model (CHM) by subtracting DTM from spike-free DSM [lasheight].
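
As referenced in the goal above, here is a minimal sketch of how steps 4 to 6 might look, assuming ground-classified, buffered tiles in a folder called ‘tiles’ and a LAStools version recent enough to offer the ‘-spike_free’ option of las2dem; all folder names, the 0.5 meter raster step, and the spike-free freeze value of 1.0 are assumptions that would need tuning to the data.

lasheight -i tiles\*.laz ^
          -replace_z ^
          -odir tiles_normalized -olaz ^
          -cores 4

lasthin -i tiles_normalized\*.laz ^
        -subcircle 0.05 ^
        -odir tiles_disks -olaz ^
        -cores 4

las2dem -i tiles_disks\*.laz ^
        -spike_free 1.0 ^
        -step 0.5 ^
        -use_tile_bb ^
        -odir chm -obil ^
        -cores 4

Because the elevations are replaced by heights above ground in the lasheight step, the spike-free surface rasterized by las2dem is already a CHM in this sketch; if the original elevations were kept instead, the DTM from step 3 would be subtracted afterwards as planned in step 7.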

References:
Butt, N., Campbell, G., Malhi, Y., Morecroft, M., Fenn, K., & Thomas, M. (2009). Initial results from establishment of a long-term broadleaf monitoring plot at Wytham Woods, Oxford, UK. University Oxford, Oxford, UK, Rep.
Khosravipour, A., Skidmore, A.K., Isenburg, M., Wang, T.J., Hussin, Y.A., (2014). Generating pit-free Canopy Height Models from Airborne LiDAR. PE&RS = Photogrammetric Engineering and Remote Sensing 80, 863-872.
Khosravipour, A., Skidmore, A.K., Isenburg, M. and Wang, T.J. (2015) Development of an algorithm to generate pit-free Digital Surface Models from LiDAR, Proceedings of SilviLaser 2015, pp. 247-249, September 2015.
Khosravipour, A., Skidmore, A.K., Isenburg, M (2016) Generating spike-free Digital Surface Models using raw LiDAR point clouds: a new approach for forestry applications, (journal manuscript under review).


LASmoons: Chris J. Chandler

Chris J. Chandler (recipient of three LASmoons)
School of Geography
University of Nottingham, UNITED KINGDOM

Background:
Wetlands provide a range of important ecosystem services: they store carbon, regulate greenhouse gas emissions, provide flood protection as well as water storage and purification. Preserving these services is critical to achieve sustainable environmental management. Currently, mangrove forests are protected in Mexico, however, fresh water wetland forests, which also have high capacity for storing carbon both in the trees and in the soil, are not protected under present legislation. As a result, coastal wetlands in Mexico are threatened by conversion to grazing areas, drainage for urban development and pollution. Given these threats, there is an urgent need to understand the current state and distribution of wetlands to inform policy and protect the ecosystem services provided by these wetlands.
In this project we will combine field data collection, satellite data (i.e. optical remote sensing, radar and LiDAR remote sensing) and modelling to provide an integrated technology for assessing the value of a range of ecosystem services, tested to proof of concept stage based on carbon storage. The outcome of the project will be a tool for mapping the value of a range of ecosystem services. These maps will be made directly available to local stakeholders including policy makers and land users to inform policy regarding forest protection/legislation and aid development of financial incentives for local communities to protect these services.

Wetland classification in the Chiapas region of Mexico

Goal:
At this stage of the project we have characterized wetlands for three priority areas in Mexico (Pantanos de Centla, La Encrucijada and La Mancha). The next stage is the upscaling of the field data at the three study sites using LiDAR data to produce a high quality Canopy Height Model (CHM), which is of great importance for biomass estimation (Ferraz et al., 2016). A high quality CHM will be achieved using the LAStools software.

Data:
+ LiDAR provided by the Mexican National Institute of Statistics and Geography (INEGI)
+ average height: 5500 m, mirror angle: +/- 30 degrees, speed: 190 knots
+ collected with Cessna 441, Conquest II system at 1 pts/m².

LAStools processing:
1) create 1000 meter tiles with 35 meter buffer to avoid edge artifacts [lastile]
2) classify point clouds into ground and non-ground [lasground]
3) normalize height of points above the ground [lasheight]
4) create a Digital Terrain and Surface Model (DTM and DSM) [las2dem]
5) generate a spike-free Canopy Height Model (CHM) as described here and here [las2dem]
6) compute various metrics for each plot and the normalized tiles [lascanopy]

References:
Ferraz, A., Saatchi, S., Mallet, C., Jacquemoud S., Gonçalves G., Silva C.A., Soares P., Tomé, M. and Pereira, L. (2016). Airborne Lidar Estimation of Aboveground Forest Biomass in the Absence of Field Inventory. Remote Sensing, 8(8), 653.

LASmoons: Manuel Jurado

Manuel Jurado (recipient of three LASmoons)
Departamento de Ingeniería Topográfica y Cartografía
Universidad Politécnica de Madrid, SPAIN

Background:
The availability of LiDAR data is creating a lot of innovative possibilities in different fields of science, education, and other fields of interest. One of the areas that has been deeply impacted by LiDAR is cartography, in particular a highly specialized sub-field of cartography in the domain of recreational and professional orienteering running: the production of high-quality maps for orienteering races (Ditz et al., 2014). These are thematic maps with a lot of fine detail which demands many hours of field work from the map maker. In order to reduce the fieldwork, LiDAR data obtained from Airborne Research Australia (ARA) is going to be used to derive DEMs and to extract features that must be included in these maps. The data will be filtered and processed with the help of LAStools.

Final map with symbolism typical for use in orienteering running

Goal:
The goal of this project is to extract point (boulders, mounds), linear (contours, erosion gullies, cliffs), and area features (vegetation density) that should be drawn in an orienteering map derived from high-resolution LiDAR. Different LiDAR-derived raster images are being created: 0.5 m DTM, vegetation density (J. Ryyppo, 2013), slope, Sky-View factor (Ž. Kokalj et al., 2011), and shaded relief. The area used is in Renmark, South Australia, and the produced map is going to be used for the Australian Orienteering Championships 2018.

Sky-View factor of DTM for same area as shown above.

Data:
+ 4 square kilometers of airborne LiDAR data produced by Airborne Research Australia at 18 pulses per square meter using the full waveform scanning LiDAR Q680i-S laser scanner from RIEGL
+ 60 hours of check and validation work in the field

LAStools processing:
1) tile into 500 by 500 meter tiles with 20 meter buffer [lastile]
2) classify isolated points as noise [lasnoise]
3) classify point clouds into ground and non-ground [lasground]
4) create a Digital Terrain Model (DTM) [las2dem]
5) normalize height of points above the ground [lasheight]
6) compute vegetation density metrics [lascanopy]
7) create hillshades of the raster DTMs [blast2dem or GDAL]

References:
Ditz, Robert, Franz Glaner, and Georg Gartner. (2014). “Laser Scanning and Orienteering Maps.” Scientific Journal of Orienteering 19.1.
Ryyppo, Jarkko. (2013). “Karttapullautin vegetation mapping guide”.
Kokalj, Žiga, Zaksek, Klemen, and Oštir, Krištof. (2011). Application of sky-view factor for the visualization of historic landscape features in lidar-derived relief models. Antiquity. 85. 263-273.

LASmoons: Sebastian Kasanmascheff

Sebastian Kasanmascheff (recipient of three LASmoons)
Forest Inventory and Remote Sensing
Georg-August-Universität Göttingen, GERMANY

Background:
Forest inventories are the backbone of forest management in Germany. In most federal forestry administrations in Germany they are performed every ten years in order to assure that logging activities are sustainable. The process involves trained foresters who visit each stand (i.e. an area where the forest is similar in terms of age structure and tree species) and perform angle count sampling as developed by Walter Bitterlich in 1948. In a second step the annual growth is calculated using yield tables and finally a harvest volume is derived. There are three particular reasons to investigate how remote sensing can be integrated into the current inventory system:

  1. The current process does not involve random sampling of the sampling points and thus does not offer any measure of the accuracy of the data.
  2. Forest engineers hardly ever rely on the inventory data as a stand-alone basis for logging planning. Most often they rely on intuition alone and on the total volume count that they have to deliver for a wider area every year.
  3. In the last ten years, the collection of high-resolution LiDAR data has become more cost-effective and most federal agencies in Germany have access to it.

In order to be able to integrate the available remote-sensing data for forest inventories in Germany, it is important to tell apart different tree species as well as estimate their volumes.

Hesse is one of the most forested federal states in Germany.

Goal:
The goal of this project is to perform an object-based classification of conifer trees in Northern Hesse based on high-resolution LiDAR and multi-spectral orthophotos. The first step is to delineate the tree crowns. The second step is to perform a semi-automated classification using the spectral signature of the different conifer species.

Data:
+ DSM (1 m), DTM (1 m), DSM (0.2 m) of the study area
+ Stereo images with 0.2 m resolution
+ high-resolution LiDAR data (average 10 points/m²)
+ forest inventory data
+ vector files of the individual forest stands
+ ground control points (field data)
All of this data is provided by the Hessian Forest Agency (HessenForst).

LAStools processing:
1) merge and clip the LAZ files [las2las]
2) classify ground and non-ground points [lasground]
3) remove low and high outliers [lasheight, lasnoise]
4) identify buildings within the study area [lasclassify]
5) create a normalized point cloud [lasheight]
6) create a highest-return canopy height model (CHM) [lasthin, las2dem]
7) create a pit-free (CHM) with the spike-free algorithm [las2dem]

LASmoons: Martin Buchauer

Martin Buchauer (recipient of three LASmoons)
Cartography & Geomedia Technology
University of Applied Science Munich, GERMANY

Background:
Salt marsh areas provide numerous services such as natural flood defenses, carbon sequestration, agricultural services, and are a valuable coastal habitat for flora, fauna and humans. However, they are not only threatened by the constant rise of sea levels caused by global warming but also by human settlement in coastal areas. A sensible local coastal development is important as it may help to support the development and progression of stressed salt marshes.

Looking South you can see the salt marsh area next to a famous golf course with St Andrews in the background.

Goal:
This research aims to visualize and extract vegetation metrics as well as to perform a temporal analysis of four salt marsh data sets derived from terrestrial laser scanning. Located at the south and north shore of the Eden Estuary near St Andrews, Scotland, the scans were acquired in the summer and winter of 2016. Ground-based laser scanning is an ideal method for fully reconstructing vegetation structures as well as for retrieving meaningful metrics such as height, area, and vegetation density. Although this technology has frequently been applied in the area of forestry, its application to salt marsh areas has not yet been fully explored.

Data:
+ TLS data acquired with a Leica HDS6100 (average density of 38000 points/m²)
+ ground control points (field data)

LAStools processing:
1) check the quality of the LiDAR data [lasinfo, lasoverlap, lasgrid]
2) merge and retile the original data with buffers [lastile]
3) classify point clouds into ground and non-ground [lasthin, lasground]
4) create digital terrain (DTM) and digital surface models (DSM) [lasthin, las2dem, blast2dem]
