Cyrill Stachniss
  • 468
  • 5,012,393

Videos

Talk by X. Zhong: 3D LiDAR Mapping in Dynamic Environments using a 4D Implicit Neural Rep. (CVPR'24)
1.1K views · 21 days ago
CVPR 2024 Talk by X. Zhong about the paper: X. Zhong, Y. Pan, C. Stachniss, and J. Behley, “3D LiDAR Mapping in Dynamic Environments using a 4D Implicit Neural Representation,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), 2024. PAPER: www.ipb.uni-bonn.de:25000/wp-content/papercite-data/pdf/zhong2024cvpr.pdf CODE: github.com/PRBonn/4dNDF
Talk by L. Nunes: Scaling Diffusion Models to Real-World 3D LiDAR Scene Completion (CVPR'24)
1K views · 28 days ago
CVPR 2024 Talk by Lucas Nunes about the paper: L. Nunes, R. Marcuzzi, B. Mersch, J. Behley, and C. Stachniss, “Scaling Diffusion Models to Real-World 3D LiDAR Scene Completion,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), 2024. PDF: www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/nunes2024cvpr.pdf CODE: github.com/PRBonn/LiDiff
Talk by M. Sodano: Open-World Semantic Segmentation Including Class Similarity (CVPR'24)
1.5K views · 1 month ago
CVPR 2024 Talk by Matteo Sodano about the paper: M. Sodano, F. Magistri, L. Nunes, J. Behley, and C. Stachniss, “Open-World Semantic Segmentation Including Class Similarity,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), 2024. PAPER: www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/sodano2024cvpr.pdf CODE: github.com/PRBonn/ContMAV
Talk by D. Casado Herraez: Radar-Only Odometry and Mapping for Autonomous Vehicles (ICRA'2024)
1.1K views · 1 month ago
Talk at ICRA'2024 about the paper: D. Casado Herraez, M. Zeller, L. Chang, I. Vizzo, M. Heidingsfeld, and C. Stachniss, “Radar-Only Odometry and Mapping for Autonomous Vehicles,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2024. PDF: www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/casado-herraez2024icra.pdf
Trailer: High Precision Leaf Instance Segmentation in Point Clouds Obtained Under Real Field...
530 views · 1 month ago
Trailer for the paper: E. Marks, M. Sodano, F. Magistri, L. Wiesmann, D. Desai, R. Marcuzzi, J. Behley, and C. Stachniss, “High Precision Leaf Instance Segmentation in Point Clouds Obtained Under Real Field Conditions,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 8, pp. 4791-4798, 2023. doi:10.1109/LRA.2023.3288383 PDF: www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/marks2023ra...
Talk by E. Marks: High Precision Leaf Instance Segmentation in Point Clouds Obtained Under Real...
355 views · 1 month ago
Talk about the paper: E. Marks, M. Sodano, F. Magistri, L. Wiesmann, D. Desai, R. Marcuzzi, J. Behley, and C. Stachniss, “High Precision Leaf Instance Segmentation in Point Clouds Obtained Under Real Field Conditions,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 8, pp. 4791-4798, 2023. doi:10.1109/LRA.2023.3288383 PDF: www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/marks2023ral...
Trailer: Effectively Detecting Loop Closures using Point Cloud Density Maps
891 views · 2 months ago
Paper trailer for the work: S. Gupta, T. Guadagnino, B. Mersch, I. Vizzo, and C. Stachniss, “Effectively Detecting Loop Closures using Point Cloud Density Maps,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2024. PDF: www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/gupta2024icra.pdf CODE: github.com/PRBonn/MapClosures
Talk by S. Gupta: Effectively Detecting Loop Closures using Point Cloud Density Maps (ICRA'2024)
536 views · 2 months ago
Talk at ICRA'2024 about the paper: S. Gupta, T. Guadagnino, B. Mersch, I. Vizzo, and C. Stachniss, “Effectively Detecting Loop Closures using Point Cloud Density Maps,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2024. PDF: www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/gupta2024icra.pdf CODE: github.com/PRBonn/MapClosures
Trailer: LocNDF: Neural Distance Field Mapping for Robot Localization (RAL'24)
765 views · 2 months ago
Paper trailer for the work: L. Wiesmann, T. Guadagnino, I. Vizzo, N. Zimmerman, Y. Pan, H. Kuang, J. Behley, and C. Stachniss, “LocNDF: Neural Distance Field Mapping for Robot Localization,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 8, pp. 4999-5006, 2023. doi:10.1109/LRA.2023.3291274 PDF: www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/wiesmann2023ral-icra.pdf CODE: github.com...
Talk by L. Wiesmann: LocNDF: Neural Distance Field Mapping for Robot Localization (RAL-ICRA'24)
1.2K views · 2 months ago
Talk about the paper: L. Wiesmann, T. Guadagnino, I. Vizzo, N. Zimmerman, Y. Pan, H. Kuang, J. Behley, and C. Stachniss, “LocNDF: Neural Distance Field Mapping for Robot Localization,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 8, pp. 4999-5006, 2023. doi:10.1109/LRA.2023.3291274 PDF: www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/wiesmann2023ral-icra.pdf CODE: github.com/PRBon...
Trailer: Unsupervised Pre-Training for 3D Leaf Instance Segmentation (RAL'2023)
395 views · 2 months ago
Paper trailer about the work: G. Roggiolani, F. Magistri, T. Guadagnino, J. Behley, and C. Stachniss, “Unsupervised Pre-Training for 3D Leaf Instance Segmentation,” IEEE Robotics and Automation Letters (RA-L), vol. 8, pp. 7448-7455, 2023. doi:10.1109/LRA.2023.3320018 PDF: www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/roggiolani2023ral.pdf CODE: github.com/PRBonn/Unsupervised-Pre-Training-fo...
Talk by G. Roggiolani: Unsupervised Pre-Training for 3D Leaf Instance Segmentation (RAL-ICRA'24)
234 views · 2 months ago
Talk about the paper: G. Roggiolani, F. Magistri, T. Guadagnino, J. Behley, and C. Stachniss, “Unsupervised Pre-Training for 3D Leaf Instance Segmentation,” IEEE Robotics and Automation Letters (RA-L), vol. 8, pp. 7448-7455, 2023. doi:10.1109/LRA.2023.3320018 PDF: www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/roggiolani2023ral.pdf CODE: github.com/PRBonn/Unsupervised-Pre-Training-for-3D-Lea...
Trailer: Tree Instance Segmentation and Traits Estimation for Forestry Environments... (ICRA'24)
496 views · 2 months ago
Paper trailer for the work: M. V. R. Malladi, T. Guadagnino, L. Lobefaro, M. Mattamala, H. Griess, J. Schweier, N. Chebrolu, M. Fallon, J. Behley, and C. Stachniss, “Tree Instance Segmentation and Traits Estimation for Forestry Environments Exploiting LiDAR Data,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2024. PDF: www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/mall...
Talk by M. Malladi: Tree Instance Segmentation and Traits Estimation for Forestry Environments...
305 views · 2 months ago
ICRA'2024 Talk by Meher Malladi about the paper: M. V. R. Malladi, T. Guadagnino, L. Lobefaro, M. Mattamala, H. Griess, J. Schweier, N. Chebrolu, M. Fallon, J. Behley, and C. Stachniss, “Tree Instance Segmentation and Traits Estimation for Forestry Environments Exploiting LiDAR Data,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2024. PDF: www.ipb.uni-bonn.de/wp-content/pa...
Talk by L. Chong: Unsupervised Generation of Labeled Training Images for Crop-Weed Segmentation...
362 views · 2 months ago
Trailer: Unsupervised Generation of Labeled Training Images for Crop-Weed Segmentation in New ...
301 views · 2 months ago
Talk by F. Magistri: Efficient and Accurate Transformer-Based 3D Shape Completion and Reconstruction
419 views · 2 months ago
Trailer: Efficient and Accurate Transformer-Based 3D Shape Completion and Reconstruction of Fruits..
461 views · 2 months ago
Trailer: Mask4D: End-to-End Mask-Based 4D Panoptic Segmentation for LiDAR Sequences (RAL'23/ICRA'24)
347 views · 2 months ago
Trailer: Mask-Based Panoptic LiDAR Segmentation for Autonomous Driving (RAL'23/IROS'23)
418 views · 2 months ago
Talk by R. Marcuzzi: Mask4D: End-to-End Mask-Based 4D Panoptic Segmentation for LiDAR Data (ICRA'24)
670 views · 2 months ago
Talk by M. Zeller: Radar Tracker: Moving Instance Tracking in Sparse and Noisy Radar Data (ICRA'24)
822 views · 2 months ago
Trailer: Building Volumetric Beliefs for Dynamic Environments Exploiting Map-Based MOS (RAL'23)
403 views · 3 months ago
Talk by B. Mersch: Building Volumetric Beliefs for Dynamic Environments Exploiting MOS (RAL'23)
328 views · 3 months ago
Trailer: Generalizable Stable Points Segmentation for 3D LiDAR Long-Term Localization (RAL'24)
642 views · 3 months ago
Talk by M. Zeller: Radar Instance Transformer: Reliable Moving Instance Segmentation 4 Radar (T-RO)
696 views · 3 months ago
Self-Driving Cars: Radar Perception (Matthias Zeller)
1.9K views · 4 months ago
Talk by R. Marcuzzi: Mask-Based Panoptic LiDAR Segmentation for Autonomous Driving (RAL'23)
1.8K views · 9 months ago
Talk by Y. Pan: Panoptic Mapping with Fruit Completion and Pose Estimation ... (IROS'23)
1.2K views · 9 months ago

COMMENTS

  • @chaolinshi1816 · 4 hours ago

    Very clearly explained, thanks!

  • @bithigh8301 · 1 day ago

    Is a reinforcement learning lecture coming? 😊

  • @adityavardhanjain · 2 days ago

    These are life savers

  • @iznasen · 3 days ago

    nice!

  • @ilhamm1915 · 4 days ago

    Amazing work! Hats off to you good sir, concise explanation of a very complex topic

  • @tejasstanley · 5 days ago

    Hello, are the slides for this lecture available online?

  • @Dyxuki · 7 days ago

    Is graph SLAM only full (offline) SLAM, or can it be online too?

  • @MultiHomestead · 7 days ago

    Where can I find the lecture notes?

  • @user-sd2cd2vj1f · 10 days ago

    How do I preprocess the NGSIM dataset and implement vehicle trajectory prediction in Python?

  • @tangiergao7766 · 10 days ago

    Thank you for sharing sir. I have a question: why is the complexity O(k^2.4 + n^2)? I understand that O(k^2.4) comes from the matrix inversion, but where does O(n^2) come from? Why isn't it something like O(n^2 * k + n * k^2) due to matrix multiplication?

  • @ajayiabdulmalik9446 · 11 days ago

    As a prospective PhD student, this is a rare gem, as is the channel in its entirety!!!

  • @hagenoneill9142 · 11 days ago

    What is the size of the model that performs this? How fast does it run?

  • @user-yk8yq5rn8v · 12 days ago

    The time when 3D data will be actively used in generative models seems to be approaching.

  • @markopopoland · 12 days ago

    Excellent 👌

  • @geethanarayanan2896 · 13 days ago

    Too good - I wish I had studied your videos 10 years ago when I was starting out. Somehow, the books don't give an intuitive picture making this a much more difficult area to approach than it should be. Prof. Stachniss, you should write a book with some good pen and paper and programming exercises. Forstner is probably the best right now. (I work in self driving cars, on BEV modelling, and LOVE this subject).

  • @sandman94 · 15 days ago

    Thank you, amazing explanation. 👍

  • @leecheng2005 · 18 days ago

    An outstanding lecture on template matching, both theoretical and practical.

  • @chasko9372 · 19 days ago

    So is the initial input to both the KF and EKF the Gaussian PDF, or something else?

    • @CyrillStachniss · 18 days ago

      Yes, your initial belief is Gaussian (but it can have a high uncertainty/variance).
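
      For reference, a minimal sketch of such a Gaussian initial belief in code, assuming a 1D state and made-up noise values (Python, illustration only, not taken from the lecture):

      import numpy as np

      # Initial belief: Gaussian with some mean and a large variance (= high uncertainty).
      mu = np.array([0.0])          # initial state estimate (1D position, assumed)
      Sigma = np.array([[1e3]])     # large initial variance: "we barely know anything yet"

      # Linear prediction step x_t = A x_{t-1} + B u_t with process noise covariance R.
      A = np.array([[1.0]])
      B = np.array([[1.0]])
      R = np.array([[0.1]])
      u = np.array([0.5])
      mu_bar = A @ mu + B @ u
      Sigma_bar = A @ Sigma @ A.T + R

      # Correction step with observation z = C x + noise, measurement covariance Q.
      C = np.array([[1.0]])
      Q = np.array([[0.5]])
      z = np.array([0.6])
      K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
      mu = mu_bar + K @ (z - C @ mu_bar)
      Sigma = (np.eye(1) - K @ C) @ Sigma_bar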

  • @BruJacksonS2 · 19 days ago

    Very good explanation! Thank you! You did not mention accuracy in your video; could you explain how accuracy is calculated for this kind of model? The math behind it isn't clear to me.

  • @elclay · 19 days ago

    Impressive work! Could you please provide a GitHub repository for reproducibility?

  • @VolumetricTerrain-hz7ci · 23 days ago

    There are unknown ways to visualize subspaces, or vector spaces. You can stretch the width of the x axis, for example, in the right line of a 3D stereo image, and also get depth, as shown below. L R |____| |______| TIP: To get the 3D depth, close one eye and focus on either the left or right line, and then open it. This is because the z axis uses x to get depth. Which means that you can get double depth in the image.... 4D depth??? :O

  • @adityavardhanjain · 24 days ago

    I wish to apply this practically. I have only thought of the graph implementation.

  • @theotimed2613 · 25 days ago

    Really nice! What is the cheapest LiDAR sensor chip for 3D indoor mapping?

  • @foadgaroosi4096 · 28 days ago

    TNQ

  • @kaptorkin · 28 days ago

    On slide 18 (computing alpha_t): isn't alpha_t = argmin over alpha of sum(pa(alpha))?

  • @erfanamkh7220 · 1 month ago

    Great Work👍

  • @Jianju69 · 1 month ago

    So clearly explained! Thank you.

  • @morrobotik8105 · 1 month ago

    Thank you Professor.

  • @user-if1yt1vr2n · 1 month ago

    Great work, Matteo! Cheers from Boulder

  • @Lee_Jaehwan · 1 month ago

    Hello, I enjoyed watching your video. And I have one question. In radar mapping, if you use distance thresholding like that, wouldn't newly observed objects not be mapped?

  • @sELFhATINGiNDIAN · 1 month ago

    No

  • @koushikg1655 · 1 month ago

    Amazing

  • @alexander8908 · 1 month ago

    I noticed LiDAR would most likely be something installed on top of a passenger car like a 'TAXI' sign, but that kind of appearance would not aesthetically compromise a TaxiBot's design value. A couple of missing pieces in achieving level-5 autonomous practicality would probably be: • multi-weather urban road operation, with various kinds of debris flying around the LiDAR's 3D detection scope while in motion, and (but not limited to) • a black-box autopilot system capturing these events as and when the car is on the road, without a time limit on the data captured. Do you reckon that a level-5 autonomous TaxiBot well equipped with cameras + radars + LiDARs + a black box (with unlimited data captured via the cloud) is ready to be rolled out on Chinese roads (citing, for example, Huawei's level-5 software)?

  • @mayanksharma7354 · 1 month ago

    Thank you for this video.

  • @akanguven114 · 1 month ago

    I would be glad if someone knowledgeable could reply: shouldn't we take the Jacobian of f w.r.t. the control signal u to get B_Jacobian, and take it into account by adding B_Jacobian*u when calculating the prediction of the states? At 37:20. Thanks.

  • @yonathanashebir6324 · 1 month ago

    my most productive 5 min today

  • @squaidinkarts · 1 month ago

    This is an amazing lecture, thank you!

  • @bakersaga8186 · 1 month ago

    So is translation still linear, or also nonlinear like rotation?

  • @ElyasafCohen-vg5xk · 1 month ago

    Correction: Viola & Jones isn't from the nineties; the initial version is from 2001. Anyway, great lecture!

  • @kaptorkin · 1 month ago

    How do you apply it in the case of mixed outlier/outlier-free measurements?

    • @CyrillStachniss · 1 month ago

      You always also need outlier-free observations. If you have 100% outliers, no state estimator can do its job…

    • @kaptorkin · 1 month ago

      @CyrillStachniss Let me put it a bit more clearly. For example, we have odometry measurements (they don't contain outliers, and we model their error distribution as Gaussian), and at the same time we have visual landmark measurements (which do contain outliers). So we have two groups of measurements, one without outliers and one with outliers. In that case, do we apply a robust kernel to the visual landmark measurements and the usual L2 kernel to the odometry measurements? Or do we just mix all the measurements and apply one robust kernel to every kind of measurement?
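
      For reference, a common way to set this up, sketched below with assumed function names and a Huber kernel (Python, illustration only, not from this thread): the outlier-prone landmark terms get a robust weight, while the odometry terms keep the standard squared (L2) cost. In an iteratively reweighted least-squares solver these weights scale each error term at every iteration.

      import numpy as np

      def huber_weight(residual_norm, delta=1.0):
          # IRLS weight of the Huber kernel: full weight inside the delta band,
          # down-weighted (linear cost) outside of it.
          return 1.0 if residual_norm <= delta else delta / residual_norm

      def build_weights(odom_residuals, landmark_residuals, delta=1.0):
          # Odometry terms: assumed outlier-free, so they keep the plain L2 cost (weight 1).
          w_odom = [1.0 for _ in odom_residuals]
          # Landmark terms: may contain outliers, so they get the robust Huber weight.
          w_lm = [huber_weight(np.linalg.norm(r), delta) for r in landmark_residuals]
          return w_odom, w_lm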

  • @user-td8vz8cn1h · 1 month ago

    This video is amazing. Thanks for the clear and concise explanation. Watched it with pleasure.

  • @kushalyarlagadda3522 · 1 month ago

    May I know which sensors were used? Although the ARS-408 2D radar sensor was mentioned as an example, there is nothing about the 3D radar sensor.

  • @geethanarayanan2896 · 1 month ago

    So much wisdom in these videos. Bedtime entertainment.

  • @user-ct3pw5jd2s · 1 month ago

    Thank you, that's amazing. Do you have a dataset?

  • @user-eh5zk5bb9k · 1 month ago

    nice!

  • @mackenzieking4208 · 2 months ago

    big up cyrill

  • @akanguven114 · 2 months ago

    Also, we use the nonlinear function for computing the predicted state values around the linearization point, right? In every iteration we put the estimated values into the nonlinear function to compute the state, which we then use in the prediction vector, which is the first step of the EKF, right? Thank you.

  • @IvanIvanov-dk6sm · 2 months ago

    How did you fine-tune the parameters of KISS-ICP on MulRan KAIST 03? I ran KISS-ICP on this dataset several times and the odometry was awful. At high car speeds the scan matching did not work at all.

    • @saurabhgupta5662 · 2 months ago

      Hello, we did not do any fine-tuning of KISS-ICP for any MulRan sequence used in our work. It gave us good odometry results with its default parameters.

    • @IvanIvanov-dk6sm · 2 months ago

      @saurabhgupta5662 Thank you!

  • @dodeakim · 2 months ago

    😍