<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>吴言吴语</title>
<link>https://wym.netlify.app/</link>
<description>Recent content on 吴言吴语</description>
<generator>Hugo -- gohugo.io</generator>
<language>en</language>
<copyright>wu</copyright>
<lastBuildDate>Sun, 20 Aug 2017 21:38:52 +0800</lastBuildDate>
<atom:link href="https://wym.netlify.app/index.xml" rel="self" type="application/rss+xml" />
<item>
<title> 📜 Paper Reading | ClusterVO: Clustering Moving Instances and Estimating Visual Odometry for Self and Surroundings</title>
<link>https://wym.netlify.app/2020-05-05-cluster-vo/</link>
<pubDate>Tue, 05 May 2020 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2020-05-05-cluster-vo/</guid>
<description><blockquote>
<p><strong>ClusterVO: Clustering Moving Instances and Estimating Visual Odometry for Self and Surroundings</strong><br />
Huang J, Yang S, Mu T J, et al. <a href="https://arxiv.org/pdf/2003.12980"><strong>ClusterVO: Clustering Moving Instances and Estimating Visual Odometry for Self and Surroundings</strong></a>[J]. arXiv preprint arXiv:2003.12980, <strong>2020</strong>. (CVPR 2020)<br />
Prof. Shi-Min Hu's group, Tsinghua University; <a href="https://www.youtube.com/watch?v=paK-WCQpX-Y&amp;feature=youtu.be">demo video</a><br />
Prior work: Huang J, Yang S, Zhao Z, et al. <a href="http://openaccess.thecvf.com/content_ICCV_2019/papers/Huang_ClusterSLAM_A_SLAM_Backend_for_Simultaneous_Rigid_Body_Clustering_and_ICCV_2019_paper.pdf"><strong>ClusterSLAM: A SLAM Backend for Simultaneous Rigid Body Clustering and Motion Estimation</strong></a>[C]//Proceedings of the IEEE International Conference on Computer Vision. <strong>2019</strong>: 5875-5884.</p>
</blockquote></description>
</item>
<item>
<title> ⭐ Summary | A Roundup of Outstanding SLAM Labs in China and Abroad</title>
<link>https://wym.netlify.app/2020-04-26-slam-lab/</link>
<pubDate>Sun, 26 Apr 2020 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2020-04-26-slam-lab/</guid>
<description>Original Zhihu post: A Roundup of Outstanding SLAM Labs in China and Abroad | PDF version; also published on the WeChat official account 泡泡机器人 SLAM. Excerpted from my GitHub repository: wuxiaolang/Visu</description>
</item>
<item>
<title> ⭐ Summary | Are 92 Open-Source Visual SLAM Projects Enough for You?</title>
<link>https://wym.netlify.app/2020-03-31-open-slam-code/</link>
<pubDate>Tue, 31 Mar 2020 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2020-03-31-open-slam-code/</guid>
<description>Original Zhihu post: Are 92 Open-Source Visual SLAM Projects Enough for You? | PDF version. Excerpted from my GitHub repository: wuxiaolang/Visual_SLAM_Related_Resear</description>
</item>
<item>
<title> 📜 Paper Reading | SLAM-Based Outdoor Localization Using Poor GPS and 2.5D Building Models</title>
<link>https://wym.netlify.app/2020-03-13-building-models/</link>
<pubDate>Fri, 13 Mar 2020 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2020-03-13-building-models/</guid>
<description><blockquote>
<p><strong>Towards Monocular-SLAM-Based Outdoor Localization Using Poor GPS and 2.5D Building Models</strong><br />
Liu R, Zhang J, Chen S, et al. <a href="https://ieeexplore.ieee.org/abstract/document/8943728/"><strong>Towards SLAM-based outdoor localization using poor GPS and 2.5 D building models</strong></a>[C]//2019 IEEE International Symposium on Mixed and Augmented Reality (<strong>ISMAR</strong>). IEEE, <strong>2019</strong>: 1-7.<br />
Zhejiang University of Technology and the University of Hamburg; <a href="https://github.com/lauchlry/Buiding-GPS-SLAM"><strong>open-source code</strong></a><br />
Prior work: Arth C, Pirchheim C, Ventura J, et al. <a href="https://ieeexplore.ieee.org/abstract/document/7164332/"><strong>Instant outdoor localization and slam initialization from 2.5 d maps</strong></a>[J]. IEEE transactions on visualization and computer graphics, <strong>2015</strong>, 21(11): 1309-1318.</p>
</blockquote></description>
</item>
<item>
<title> 📜 Paper Reading | Eigen-Factors: Plane Estimation for Multi-Frame and Time-Continuous Point Cloud Alignment</title>
<link>https://wym.netlify.app/2019-10-19-eigen-factors/</link>
<pubDate>Sat, 19 Oct 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-10-19-eigen-factors/</guid>
<description><blockquote>
<p><strong>Eigen-Factors: Plane Estimation for Multi-Frame and Time-Continuous Point Cloud Alignment</strong><br />
Ferrer G. <a href="http://sites.skoltech.ru/app/data/uploads/sites/50/2019/07/ferrer2019planes.pdf"><strong>Eigen-Factors: Plane Estimation for Multi-Frame and Time-Continuous Point Cloud Alignment</strong></a>[C]. <strong>IROS 2019</strong><br />
Skolkovo Institute of Science and Technology (Russia), Samsung; <a href="https://gitlab.com/gferrer/eigen-factors-iros2019"><strong>open-source code</strong></a> <a href="https://www.youtube.com/watch?v=_1u_c43DFUE&amp;feature=youtu.be">demo video</a></p>
</blockquote></description>
</item>
<item>
<title> 📜 Paper Reading | Localization of Classified Objects in SLAM Using Nonparametric Statistics and Clustering</title>
<link>https://wym.netlify.app/2019-07-12-nonparametric-statistics-and-clustering/</link>
<pubDate>Fri, 12 Jul 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-07-12-nonparametric-statistics-and-clustering/</guid>
<description><blockquote>
<p><strong>Localization of Classified Objects in SLAM Using Nonparametric Statistics and Clustering</strong><br />
Iqbal A, Gans N R. <a href="https://ieeexplore.ieee.org/abstract/document/8593541/"><strong>Localization of Classified Objects in SLAM using Nonparametric Statistics and Clustering</strong></a>[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (<strong>IROS</strong>). IEEE, <strong>2018</strong>: 161-168.<br />
School of Computer Engineering, University of Texas</p>
</blockquote></description>
</item>
<item>
<title> 😀 ORB-SLAM2 Code Walkthrough (3): Optimization 2 (In Depth + Using g2o)</title>
<link>https://wym.netlify.app/2019-07-05-orb-slam2-optimization2/</link>
<pubDate>Fri, 05 Jul 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-07-05-orb-slam2-optimization2/</guid>
<description>0. Basic usage 0.1 Building the g2o model: first construct the g2o model, which involves choosing the linear solver, the block solver, and the descent algorithm; // Set up the graph model and create the optimizer. g2o::SparseOptimizer optimizer; //</description>
</item>
<item>
<title> 😀 ORB-SLAM2 Code Walkthrough (3): Optimization 1 (Overview)</title>
<link>https://wym.netlify.app/2019-07-03-orb-slam2-optimization1/</link>
<pubDate>Wed, 03 Jul 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-07-03-orb-slam2-optimization1/</guid>
<description>1. Optimized variables and errors in ORB-SLAM2: ORB-SLAM2 performs BA through nonlinear optimization. Because BA is sparse (concretely, the Jacobian and H matrices are sparse), graph optimization (representing the optimization as a graph</description>
</item>
<item>
<title>Nonlinear Optimization: the Gauss-Newton and Levenberg-Marquardt Methods</title>
<link>https://wym.netlify.app/2019-07-01-nonlinear-optimization/</link>
<pubDate>Mon, 01 Jul 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-07-01-nonlinear-optimization/</guid>
<description>1. The least-squares problem in state estimation 1.1 Bayes' rule: the two problems SLAM solves are localization (estimating the camera pose) and mapping (estimating landmark positions), so they can be described by a motion equation and an observation equation</description>
</item>
<item>
<title> 😀 ORB-SLAM2 Code Walkthrough (3): Monocular Initialization</title>
<link>https://wym.netlify.app/2019-06-17-orb-slam2-monocular-initialization/</link>
<pubDate>Mon, 17 Jun 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-06-17-orb-slam2-monocular-initialization/</guid>
<description>Monocular initialization computes the fundamental matrix F and the homography matrix H in parallel to recover the matches and initial camera pose of the first two frames, triangulates the depth of the MapPoints to obtain an initial point cloud map, and then, for the recovered point cloud</description>
</item>
<item>
<title> 😀 ORB-SLAM2 Code Walkthrough (3): Feature Extraction</title>
<link>https://wym.netlify.app/2019-06-16-orb-slam2-features/</link>
<pubDate>Sun, 16 Jun 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-06-16-orb-slam2-features/</guid>
<description>Feature matching 1. Feature matching between the two initialization frames, SearchForInitialization(): during monocular initialization, for two consecutive frames that each have more than 100 feature points, take the feature points from level 0 of the image pyramid (i.e., the original image)</description>
</item>
<item>
<title>June 2019 Paper Skimming (21 papers)</title>
<link>https://wym.netlify.app/2019-06-10-skim/</link>
<pubDate>Mon, 10 Jun 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-06-10-skim/</guid>
<description><blockquote>
<p><strong>Six works with open-source code</strong>: a modular optimization framework for tracking and mapping; plane-based geometry and texture optimization for indoor RGB-D reconstruction; ReFusion: 3D reconstruction in dynamic environments with an RGB-D camera using residuals; learn stereo, infer mono: siamese networks for self-supervised monocular depth estimation; RGBD-inertial trajectory estimation and mapping for ground robots; semantic scene completion from a single depth image
<strong>Others</strong>: integrating line-based category-specific object models into monocular SLAM; a high-speed navigation system based on robust object SLAM; oriented point sampling for plane detection in unorganized point clouds</p>
</blockquote></description>
</item>
<item>
<title> 📜 Paper Reading | Integrating Line-Based Category-Specific Object Models into Monocular SLAM</title>
<link>https://wym.netlify.app/2019-06-09-line-based-object-slam/</link>
<pubDate>Sun, 09 Jun 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-06-09-line-based-object-slam/</guid>
<description><blockquote>
<p><strong>Integrating Line-Based Category-Specific Object Models into Monocular SLAM</strong><br />
Joshi N, Sharma Y, Parkhiya P, et al. <a href="https://arxiv.org/pdf/1905.04698.pdf"><strong>Integrating Objects into Monocular SLAM: Line Based Category Specific Models</strong></a>[J]. arXiv preprint arXiv:1905.04698, <strong>2019</strong>.<br />
Authors: IIIT Hyderabad, India; <a href="https://robotics.iiit.ac.in/publications.html">lab homepage</a><br />
Prior work: Parkhiya P, Khawad R, Murthy J K, et al. <a href="https://arxiv.org/pdf/1802.09292.pdf"><strong>Constructing Category-Specific Models for Monocular Object-SLAM</strong></a>[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 1-9.</p>
</blockquote></description>
</item>
<item>
<title> 📜 Paper Reading | Plane-Based Geometry and Texture Optimization for Indoor RGB-D Reconstruction</title>
<link>https://wym.netlify.app/2019-06-06-plane-reconstruction/</link>
<pubDate>Thu, 06 Jun 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-06-06-plane-reconstruction/</guid>
<description><blockquote>
<p><strong>Plane-Based Geometry and Texture Optimization for Indoor RGB-D Reconstruction</strong><br />
Wang C, Guo X. <a href="https://arxiv.org/pdf/1905.08853.pdf"><strong>Efficient Plane-Based Optimization of Geometry and Texture for Indoor RGB-D Reconstruction</strong></a>[J]. arXiv preprint arXiv:1905.08853, <strong>2019</strong>.<br />
Authors: University of Texas at Dallas; <a href="https://scholar.google.com/citations?user=PXm3u3gAAAAJ&amp;hl=zh-CN&amp;oi=sra">Google Scholar</a><br />
<a href="https://github.com/chaowang15/plane-opt-rgbd"><strong>open-source code</strong></a></p>
</blockquote></description>
</item>
<item>
<title> 😀 ORB-SLAM2 Code Walkthrough (2): Loop Closing Thread</title>
<link>https://wym.netlify.app/2019-05-30-orb-slam2-loop/</link>
<pubDate>Thu, 30 May 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-05-30-orb-slam2-loop/</guid>
<description>0. Introduction to the loop closing thread: detecting loop closures is a fairly direct and effective way to eliminate the accumulated error of a SLAM system. After the local mapping thread finishes processing each keyframe, that keyframe is saved into the mlpLoopKeyFrameQueue queue</description>
</item>
<item>
<title> 😀 ORB-SLAM2 Code Walkthrough (2): Local Mapping Thread</title>
<link>https://wym.netlify.app/2019-05-30-orb-slam2-mapping/</link>
<pubDate>Thu, 30 May 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-05-30-orb-slam2-mapping/</guid>
<description>0. Introduction to the local mapping thread: after each successful track, the Tracking thread decides whether to make the current frame a keyframe and send it to the local mapping thread. The keyframe decision is made in the Tracking thread, but the insertion of keyframes and map points</description>
</item>
<item>
<title> 😀 ORB-SLAM2 Code Walkthrough (2): Viewer Thread</title>
<link>https://wym.netlify.app/2019-05-28-orb-slam2-viewer/</link>
<pubDate>Tue, 28 May 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-05-28-orb-slam2-viewer/</guid>
<description>0. Introduction to the viewer thread: the viewer thread displays the 3D map drawer and the 2D frame drawer, along with some switches for the run modes. It involves no actual algorithms; it only receives, relays, and displays data, and does not</description>
</item>
<item>
<title>May 2019 Paper Skimming (Part 3): Learning SLAM &amp; Others (6+20)</title>
<link>https://wym.netlify.app/2019-05-27-skim3/</link>
<pubDate>Mon, 27 May 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-05-27-skim3/</guid>
<description><blockquote>
<p><strong>Learning SLAM &amp; Others</strong>
Conditional <strong>single-view shape generation</strong> for multi-view stereo reconstruction (open-source code); PointFlowNet: learning representations for <strong>rigid motion estimation</strong> from point clouds (open-source code);
unsupervised stable <strong>interest point detection</strong> for 3D point clouds (open-source code); a <strong>closed-form preintegration</strong> method for graph-based visual-inertial navigation (open-source code); event cameras</p>
</blockquote></description>
</item>
<item>
<title>May 2019 Paper Skimming (Part 2): AR &amp; MR &amp; VR (8 papers)</title>
<link>https://wym.netlify.app/2019-05-25-skim2/</link>
<pubDate>Sat, 25 May 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-05-25-skim2/</guid>
<description><blockquote>
<p><strong>II. AR &amp; MR &amp; VR</strong>
<strong>Remote dense reconstruction</strong> interaction on DAQRI smart glasses; <strong>virtual object replacement</strong> in real environments based on 3D point clouds; <strong>NetEase open-sources monocular depth estimation and dense reconstruction for augmented reality</strong></p>
</blockquote></description>
</item>
<item>
<title>May 2019 Paper Skimming (Part 1): Geometric SLAM (16 papers)</title>
<link>https://wym.netlify.app/2019-05-15-skim1/</link>
<pubDate>Wed, 15 May 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-05-15-skim1/</guid>
<description><blockquote>
<p><strong>I. Geometric SLAM</strong>
<strong>Extremely dense feature visual SLAM</strong> from Japan's National Institute of Advanced Industrial Science and Technology (AIST); <strong>open-source direct sparse mapping</strong>; constraining monocular drift with a <strong>line model</strong>;
<strong>coarse 3D representations</strong> for fast RGB-D mapping; <strong>Zurich open-sources</strong> large-scale outdoor point cloud reconstruction; CMU <strong>local minimization solving</strong></p>
</blockquote></description>
</item>
<item>
<title> 📜 Paper Reading | SLAM with Objects Using a Nonparametric Pose Graph</title>
<link>https://wym.netlify.app/2019-05-07-nonparametric-pose-graph/</link>
<pubDate>Tue, 07 May 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-05-07-nonparametric-pose-graph/</guid>
<description><blockquote>
<p><strong>SLAM with Objects Using a Nonparametric Pose Graph</strong><br />
Mu B, Liu S Y, Paull L, et al. <a href="https://arxiv.org/pdf/1704.05959.pdf"><strong>Slam with objects using a nonparametric pose graph</strong></a>[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (<strong>IROS</strong>). IEEE, <strong>2016</strong>: 4602-4609.<br />
<strong>Authors</strong>: <a href="http://acl.mit.edu/publications">MIT Aerospace Controls Laboratory</a><br />
<a href="https://github.com/BeipengMu/objectSLAM">open-source code</a> <a href="https://www.youtube.com/watch?v=YANUWdVLJD4&amp;feature=youtu.be">demo video</a></p>
</blockquote></description>
</item>
<item>
<title> 😀 ORB-SLAM2 Code Walkthrough (2): Tracking Thread</title>
<link>https://wym.netlify.app/2019-04-27-orb-slam2-tracking/</link>
<pubDate>Sat, 27 Apr 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-04-27-orb-slam2-tracking/</guid>
<description>0. Overview of the tracking thread: the Tracking thread runs in the system's main thread and is responsible for feature extraction, pose estimation, map tracking, and keyframe selection for each frame; it can loosely be understood as the front-end odometry of SLAM</description>
</item>
<item>
<title> 📜 Paper Reading | Privacy Preserving: Image-Based Localization Using Line Clouds</title>
<link>https://wym.netlify.app/2019-04-06-privacy-preserving/</link>
<pubDate>Sat, 06 Apr 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-04-06-privacy-preserving/</guid>
<description><blockquote>
<p><strong>Privacy Preserving Image-Based Localization</strong><br />
Pablo Speciale, Johannes L. Schönberger, Sing Bing Kang. <a href="https://arxiv.org/pdf/1903.05572.pdf"><strong>Privacy Preserving Image-Based Localization</strong></a>[J] <strong>2019</strong>.<br />
<strong>Authors</strong>: <strong>ETH Zurich</strong> and Microsoft; <a href="http://people.inf.ethz.ch/sppablo/">author homepage</a>, <a href="https://www.cvg.ethz.ch/research/secon/">project page</a>; lab homepage: <a href="https://www.cvg.ethz.ch/publications/">Computer Vision and Geometry Group</a></p>
</blockquote></description>
</item>
<item>
<title>April 2019 Paper Skimming (17 papers)</title>
<link>https://wym.netlify.app/2019-04-01-skim/</link>
<pubDate>Mon, 01 Apr 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-04-01-skim/</guid>
<description>1. SlamCraft: Dense Planar Monocular SLAM [1] Rambach J, Lesur P, Pagani A, et al. SlamCraft: Dense Planar RGB Monocular SLAM[C]. International Conference on Machine Vision Applications, MVA 2019. Jason Rambach</description>
</item>
<item>
<title> 😀 ORB-SLAM2 Code Walkthrough (1): A Pass Through the System from mono_tum.cc</title>
<link>https://wym.netlify.app/2019-03-20-orb-slam2-overview/</link>
<pubDate>Wed, 20 Mar 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-03-20-orb-slam2-overview/</guid>
<description>Note: this post walks through the complete (monocular) ORB-SLAM2 pipeline starting from the mono_tum.cc file, focusing on the key steps and their execution order; most function implementations are only sketched, with detailed readings in later notes of this series. ORB-SLAM2 starts from mono_tum.cc</description>
</item>
<item>
<title> 📜 Paper Reading | Learnable Line Segment Descriptor for Visual SLAM</title>
<link>https://wym.netlify.app/2019-03-14-learnable-line/</link>
<pubDate>Thu, 14 Mar 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-03-14-learnable-line/</guid>
<description><blockquote>
<p><strong>Learnable Line Segment Descriptor for Visual SLAM</strong><br />
Vakhitov A, Lempitsky V. <a href="https://ieeexplore.ieee.org/abstract/document/8651490/"><strong>Learnable Line Segment Descriptor for Visual SLAM</strong></a>[J]. IEEE Access, <strong>2019</strong>.<br />
<strong>Authors</strong>: <strong>Alexander Vakhitov</strong> <a href="https://scholar.google.com/citations?user=g_2iut0AAAAJ&amp;hl=zh-CN&amp;oi=sra"><strong>Google Scholar</strong></a>, <strong>Victor Lempitsky</strong> <a href="https://scholar.google.com/citations?user=gYYVokYAAAAJ&amp;hl=zh-CN&amp;oi=sra"><strong>Google Scholar</strong></a><br />
Samsung AI Lab (Moscow); <a href="https://sites.google.com/site/alexandervakhitov/"><strong>author homepage</strong></a> <a href="https://ieeexplore.ieee.org/abstract/document/8651490/media#media">demo video</a><br />
<strong>Journal</strong>: IEEE Access, an open-access journal; JCR quartile: Q1, IF: 4.199<br />
Other <strong>point-line</strong> papers by the authors:<br />
ECCV 2016:<a href="https://link.springer.com/chapter/10.1007/978-3-319-46478-7_36">Accurate and linear time pose estimation from points and lines</a><br />
ICRA 2017:<a href="https://ieeexplore.ieee.org/abstract/document/7989522">PL-SLAM: Real-time monocular visual SLAM with points and lines</a><br />
ECCV 2018:<a href="http://openaccess.thecvf.com/content_ECCV_2018/html/Alexander_Vakhitov_Stereo_relative_pose_ECCV_2018_paper.html">Stereo relative pose from line and point feature triplets</a></p>
</blockquote></description>
</item>
<item>
<title> 😜 Batch-Processing the TUM and KITTI Datasets with YOLO and Saving the Detection Results</title>
<link>https://wym.netlify.app/2019-03-11-yolo-dataset/</link>
<pubDate>Mon, 11 Mar 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-03-11-yolo-dataset/</guid>
<description>0. Main work: code: https://github.com/wuxiaolang/darknet. Added a detect_tum_batch command in darknet.c to process TUM and custom datasets, and a detect_kitti_batch command</description>
</item>
<item>
<title> 📜 Paper Reading | Recovering Stable Scale in Monocular SLAM Using Object-Supplemented Bundle Adjustment</title>
<link>https://wym.netlify.app/2019-02-27-object-ba/</link>
<pubDate>Wed, 27 Feb 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-02-27-object-ba/</guid>
<description><blockquote>
<p><strong>Recovering Stable Scale in Monocular SLAM Using Object-Supplemented Bundle Adjustment</strong><br />
Frost D, Prisacariu V, Murray D. <a href="https://ieeexplore.ieee.org/abstract/document/8353862"><strong>Recovering stable scale in monocular SLAM using object-supplemented bundle adjustment</strong></a>[J]. IEEE Transactions on Robotics, <strong>2018</strong>, 34(3): 736-747.<br />
<strong>Authors</strong>: <strong>Duncan Frost</strong>: PhD from Oxford, 2017, apparently from the PTAM group; <a href="https://scholar.google.com/citations?user=P9l4zHIAAAAJ&amp;hl=zh-CN&amp;oi=sra"><strong>Google Scholar</strong></a><br />
<strong>Journal</strong>: IEEE Transactions on Robotics; JCR category: ROBOTICS, rank: 2/26, quartile: Q1, IF: 4.684<br />
<strong>Related</strong>: the authors' 2016 paper <a href="https://ieeexplore.ieee.org/abstract/document/7487680/">Object-aware bundle adjustment for correcting monocular scale drift</a></p>
</blockquote></description>
</item>
<item>
<title> 😜 Cube SLAM Code Summary: How to Recover 3D Object Poses from 2D Object Detections</title>
<link>https://wym.netlify.app/2019-02-22-cubeslam/</link>
<pubDate>Fri, 22 Feb 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-02-22-cubeslam/</guid>
<description>Note: 🌐 a roundup of the Cube SLAM papers, code annotations, and summaries. Original code: https://github.com/shichaoy/cube_slam My annotations: http</description>
</item>
<item>
<title> 📜 Paper Reading | QuadricSLAM: Dual Quadrics from Object Detections as Landmarks in Object-Oriented SLAM</title>
<link>https://wym.netlify.app/2019-01-28-quadric-slam/</link>
<pubDate>Mon, 28 Jan 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-01-28-quadric-slam/</guid>
<description><blockquote>
<p><strong>QuadricSLAM: Dual Quadrics from Object Detections as Landmarks in Object-Oriented SLAM</strong><br />
Nicholson L, Milford M, Sünderhauf N. <a href="https://ieeexplore.ieee.org/abstract/document/8440105"><strong>Quadricslam: Dual quadrics from object detections as landmarks in object-oriented slam</strong></a>[J]. IEEE Robotics and Automation Letters, <strong>2019</strong>, 4(1): 1-8.<br />
About the <strong>authors</strong>:<br />
<a href="https://www.roboticvision.org/">Australian Centre for Robotic Vision, Queensland University of Technology</a><br />
First author: <strong>Lachlan Nicholson</strong> <a href="https://scholar.google.com/citations?user=DkyLABAAAAAJ&amp;hl=zh-CN&amp;oi=sra"><strong>Google Scholar</strong></a><br />
Second author: <strong>Michael Milford</strong> (creator of RatSLAM) <a href="https://scholar.google.com/citations?user=TDSmCKgAAAAJ&amp;hl=zh-CN&amp;oi=sra"><strong>Google Scholar</strong></a><br />
Third author: <strong>Niko Sünderhauf</strong> (Suenderhauf) (arguably the biggest name of the three) <a href="https://scholar.google.com/citations?user=WnKjfFEAAAAJ&amp;hl=zh-CN"><strong>Google Scholar</strong></a> <a href="https://nikosuenderhauf.github.io/publications/"><strong>personal homepage</strong></a></p>
</blockquote></description>
</item>
<item>
<title> 😜 Cube SLAM Code Annotations</title>
<link>https://wym.netlify.app/2019-01-17-cubeslam-code/</link>
<pubDate>Thu, 17 Jan 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-01-17-cubeslam-code/</guid>
<description>Original code: https://github.com/shichaoy/cube_slam My annotations: https://github.com/wuxi</description>
</item>
<item>
<title> 📜 Paper Reading | Pop-up SLAM: Semantic Monocular Plane SLAM for Low-Texture Environments</title>
<link>https://wym.netlify.app/2019-01-11-pup-up-slam/</link>
<pubDate>Fri, 11 Jan 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-01-11-pup-up-slam/</guid>
<description><blockquote>
<p><strong>Pop-up SLAM: Semantic Monocular Plane SLAM for Low-Texture Environments</strong><br />
Yang S, Song Y, Kaess M, et al. <a href="https://arxiv.org/pdf/1703.07334"><strong>Pop-up slam: Semantic monocular plane slam for low-texture environments</strong></a>[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (<strong>IROS</strong>). IEEE, <strong>2016</strong>: 1222-1229.<br />
Author: <strong>Shichao Yang</strong>: <a href="http://www.frc.ri.cmu.edu/~syang/"><strong>personal homepage</strong></a> <a href="https://scholar.google.com/citations?user=xWtRvrMAAAAJ&amp;hl=zh-CN&amp;oi=sra"><strong>Google Scholar</strong></a> <a href="https://github.com/shichaoy"><strong>GitHub</strong></a><br />
Carnegie Mellon University: <a href="https://www.ri.cmu.edu/"><strong>The Robotics Institute of CMU</strong></a><br />
Demo video: <a href="https://www.youtube.com/watch?v=TOSOWdxmtkw">https://www.youtube.com/watch?v=TOSOWdxmtkw</a></p>
</blockquote></description>
</item>
<item>
<title> 📜 Paper Reading | Monocular Object and Plane SLAM in Structured Environments</title>
<link>https://wym.netlify.app/2019-01-06-object-plane-slam/</link>
<pubDate>Sun, 06 Jan 2019 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2019-01-06-object-plane-slam/</guid>
<description><blockquote>
<p><strong>Monocular Object-Level and Plane-Level SLAM in Structured Environments</strong><br />
Yang S, Scherer S. <strong><a href="https://arxiv.org/pdf/1809.03415.pdf">Monocular Object and Plane SLAM in Structured Environments</a></strong>[J]. arXiv preprint arXiv:1809.03415, <strong>2018</strong>.<br />
Author: <strong>Shichao Yang</strong>: <a href="http://www.frc.ri.cmu.edu/~syang/"><strong>personal homepage</strong></a> <a href="https://scholar.google.com/citations?user=xWtRvrMAAAAJ&amp;hl=zh-CN&amp;oi=sra"><strong>Google Scholar</strong></a> <a href="https://github.com/shichaoy"><strong>GitHub</strong></a><br />
Carnegie Mellon University: <a href="https://www.ri.cmu.edu/"><strong>The Robotics Institute of CMU</strong></a><br />
Demo video: <a href="https://www.youtube.com/watch?v=jzBMsKCm0uk&amp;t=11s">https://www.youtube.com/watch?v=jzBMsKCm0uk&amp;t=11s</a></p>
</blockquote></description>
</item>
<item>
<title> 📜 Paper Reading | CubeSLAM: Monocular 3D Object Detection and SLAM without Prior Models</title>
<link>https://wym.netlify.app/2018-11-30-cubeslam/</link>
<pubDate>Fri, 30 Nov 2018 00:00:00 +0800</pubDate>
<guid>https://wym.netlify.app/2018-11-30-cubeslam/</guid>
<description><blockquote>
<p><strong>CubeSLAM: Monocular 3D Object Detection and SLAM without Prior Models</strong><br />
Yang S, Scherer S. <a href="https://arxiv.org/abs/1806.00557"><strong>CubeSLAM: Monocular 3D Object Detection and SLAM without Prior Models</strong></a>[J]. arXiv preprint arXiv:1806.00557, <strong>2018</strong>.<br />
Author: <strong>Shichao Yang</strong>: <a href="http://www.frc.ri.cmu.edu/~syang/"><strong>personal homepage</strong></a> <a href="https://scholar.google.com/citations?user=xWtRvrMAAAAJ&amp;hl=zh-CN&amp;oi=sra"><strong>Google Scholar</strong></a> <a href="https://github.com/shichaoy"><strong>GitHub</strong></a><br />
Carnegie Mellon University: <a href="https://www.ri.cmu.edu/"><strong>The Robotics Institute of CMU</strong></a><br />
Demo video: <a href="https://www.youtube.com/watch?v=QnVlexXi9_c">https://www.youtube.com/watch?v=QnVlexXi9_c</a></p>
</blockquote></description>
</item>
<item>
<title>SLAM</title>
<link>https://wym.netlify.app/slam/</link>
<pubDate>Sun, 20 Aug 2017 21:38:52 +0800</pubDate>
<guid>https://wym.netlify.app/slam/</guid>
<description>GitHub: tracking visual-SLAM-related research. Zhihu: A Roundup of Open-Source Visual SLAM Systems. Zhihu: A Roundup of Outstanding SLAM Labs in China and Abroad. 🌍 ORB-SLAM2 series: ORB-SLAM2 Code Walkthrough (1): A Pass Through the System from mono_tum.cc; ORB-SLAM2 Code</description>
</item>
<item>
<title>about</title>
<link>https://wym.netlify.app/about/</link>
<pubDate>Sun, 20 Aug 2017 21:38:52 +0800</pubDate>
<guid>https://wym.netlify.app/about/</guid>
<description>About me: a master's student in Robot Science and Engineering. My interests are visual SLAM and augmented reality; I currently focus on object-level SLAM, multi-landmark SLAM, and lightweight mapping. I like movies, especially mystery and crime</description>
</item>
<item>
<title></title>
<link>https://wym.netlify.app/za/</link>
<pubDate>Sun, 20 Aug 2017 21:38:52 +0800</pubDate>
<guid>https://wym.netlify.app/za/</guid>
<description>Miscellaneous Notes</description>
</item>
</channel>
</rss>