Hello,
The Point Cloud Library (http://docs.pointclouds.org/1.7.0/group__sample__consensus.html) is popular for making sense of all the pixels that cameras give you to represent 3D space. For our collagens we wanted to fit a cylinder to our three interwoven chains in order to determine the central axis. But once you reach the abstraction of shape detection for the protein world, one can do so much more and, in particular, also tap into the wealth of code and skills of the robotics world, especially with an eye towards synthetic biology. So, my proposal is to think about (aka "ask for some grant money for") interlinking atom clouds with robotic vision.
To give you an impression, I followed the example code at https://github.com/strawlab/python-pcl/blob/master/examples/segment_cyl_plane.py from an independently developed Python interface (a Debian package is about to be uploaded) for the above-mentioned application of ours.
Some initial setup to get a collagen structure read:
#!/usr/bin/python
from BALL import *
# read the 1CAG structure into a BALL Protein
system=System()
collagenPDBfile=PDBFile("1CAG.pdb")
p=Protein()
collagenPDBfile.read(p)
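Just as a sketch of a quick sanity check that the structure really arrived (assuming the Protein object offers the same countResidues()/countAtoms() helpers that are used on the chains below):
# print a few counts to confirm the PDB entry was parsed
print "%d chains, %d residues, %d atoms" % (p.countChains(),p.countResidues(),p.countAtoms())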
Extraction of the backbone atoms (N, CA, C) of the first three chains - punish me for not using your iterators:
a=[]
for i in range(0,3):
    c=p.getChain(i)
    print "Chain %s: %d residues, %d atoms" % (c.getName(),c.countResidues(),c.countAtoms())
    for j in range(0,c.countResidues()):
        r=c.getResidue(j)
        print " "+r.getName()
        #for atomtype in ["CA"]:
        #for atomtype in ["N"]:
        for atomtype in ["CA","C","N"]:
            rpos=r.getAtom(atomtype).getPosition()
            a.append([rpos.x,rpos.y,rpos.z])
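Should the plain list of lists ever give the bindings trouble: pcl stores point coordinates as 32-bit floats, so a minimal sketch of an explicit conversion (assuming numpy is available) would be:
import numpy as np
# pcl's PointXYZ uses 32-bit floats, so convert the coordinate list explicitly;
# pcl.PointCloud should accept such an array just as well as the plain list below
points = np.asarray(a, dtype=np.float32)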
And the transformation into a PointCloud of the pcl module. It is all frighteningly sensitive to the parameter settings, but eventually one gets to something reasonable:
import pcl
pc=pcl.PointCloud(a)
seg = pc.make_segmenter_normals(ksearch=50)
seg.set_optimize_coefficients(True)
seg.set_model_type(pcl.SACMODEL_CYLINDER)
seg.set_normal_distance_weight(0.1)
seg.set_method_type(pcl.SAC_RANSAC)
seg.set_max_iterations(20000)
seg.set_distance_threshold(0.3)
seg.set_radius_limits(1, 3.5)
indices, model = seg.segment()
# The cylinder model coefficients are documented at
# http://docs.pointclouds.org/1.7.0/group__sample__consensus.html
# [point_on_axis.x point_on_axis.y point_on_axis.z axis_direction.x axis_direction.y axis_direction.z radius]
print len(a)     # number of backbone atoms fed into the point cloud
print model      # the seven cylinder coefficients
print indices    # indices of the atoms that RANSAC counts as cylinder inliers
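And to illustrate what one can already do with those seven numbers - a small sketch (again assuming numpy; the variable names are mine) that normalises the fitted axis and measures how far every atom lies from it, which should roughly reproduce the reported radius for the inliers:
import numpy as np
coords = np.asarray(a, dtype=np.float64)
point_on_axis = np.asarray(model[0:3])
axis_dir = np.asarray(model[3:6])
axis_dir = axis_dir / np.linalg.norm(axis_dir)   # normalise the axis direction
radius = model[6]
# perpendicular distance of every atom to the fitted cylinder axis
diff = coords - point_on_axis
dist_to_axis = np.linalg.norm(diff - np.outer(np.dot(diff, axis_dir), axis_dir), axis=1)
print "fitted radius: %.2f A" % radius
print "mean atom distance to axis: %.2f A" % dist_to_axis.mean()
print "atoms counted as cylinder inliers: %d of %d" % (len(indices), len(a))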
Hoping for some fruitful thought exchange in this thread.
Best,
Steffen