"I'm a bit unhappy with your code, because it's so hard to read tbh. You don't like objects?" I would phrase this differently now: I think you can improve your code by using objects when they are appropriate. :-) 2010/3/6 Ian Mallett <geometrian@gmail.com>:
Unfortunately, the pure Python implementation is actually an order of magnitude faster. The fastest solution right now is to use numpy for the transformations, then convert it back into a list (.tolist()) and use Python for the rest.
:-( But well, if it's faster, then do it that way, right? I can only guess that for large datasets, comparing each and every vector "in parallel" makes it slow down, so an iterative approach might be fine. In fact, I guess no one would implement such things in C++ without vectors and push_back() either. A bit of commentary about your code:
Here's the actual Python code.

    def glLibInternal_edges(object, lightpos):
        edge_set = set([])
Where do you use edge_set? Btw, set() would do it.
        edges = {}
        for sublist in xrange(object.number_of_lists):  # There's only one sublist here
            face_data = object.light_volume_face_data[sublist]
            for indices in face_data:  # v1, v2, v3, n
Here objects would fit in nicely:

            for indices in face_data:
                normal = object.transformed_normals[sublist][indices.nindex]
                (v1, v2, v3) = [object.transformed_vertices[sublist][vidx]
                                for vidx in indices.vindices]
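To make "objects would fit in nicely" concrete, a namedtuple is often all it takes; a minimal sketch (the name FaceIndices is my invention, with field names chosen to match the snippet above, not anything in Ian's actual code):

```python
from collections import namedtuple

# Hypothetical record for one face: three vertex indices plus the index
# of the face normal. Field names match the `indices.vindices` and
# `indices.nindex` accesses in the rewritten loop above.
FaceIndices = namedtuple("FaceIndices", ["vindices", "nindex"])

face = FaceIndices(vindices=(4, 7, 9), nindex=2)
v1_idx, v2_idx, v3_idx = face.vindices
```

The attribute names document themselves, which is exactly what an appended [3] or [i] does not do.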
                normal = object.transformed_normals[sublist][indices[3]]
                v1, v2, v3 = [object.transformed_vertices[sublist][indices[i]]
                              for i in xrange(3)]
                if abs_angle_between_rad(normal, vec_subt(v1, lightpos)) < pi_over_two:
v1 and lightpos can be stored as numpy ndarrays, no? If you have a list of vectors in an ndarray arr, you can convert it to a list of ndarrays by using list(arr). This will only iterate over the first dimension, and not recurse into the vectors in arr. Provided this, you can simply write:

                if (normal * (v1 - lightpos)).sum() > 0:
                    (...)

Note the .sum(): the elementwise product has to be summed to get a dot product, and dividing by the (positive) length of v1 - lightpos can't change the sign, so the normalization can be dropped entirely.
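A quick sketch of why the trigonometry can be dropped (the function names are mine; backfacing_by_angle stands in for the abs_angle_between_rad(...) < pi_over_two test): the angle between two vectors is below pi/2 exactly when their dot product is positive.

```python
import numpy

def backfacing_by_angle(normal, v1, lightpos):
    # Angle-based test, in the spirit of abs_angle_between_rad(...) < pi/2
    d = numpy.asarray(v1) - numpy.asarray(lightpos)
    cosine = numpy.dot(normal, d) / (numpy.linalg.norm(normal) *
                                     numpy.linalg.norm(d))
    return numpy.arccos(numpy.clip(cosine, -1.0, 1.0)) < numpy.pi / 2

def backfacing_by_dot(normal, v1, lightpos):
    # Equivalent test without arccos or sqrt: the angle is acute
    # exactly when the (unnormalized) dot product is positive.
    return numpy.dot(normal, numpy.asarray(v1) - numpy.asarray(lightpos)) > 0
```

Both predicates agree on every input; the second one just skips the normalization and the inverse cosine.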
                    for p1, p2 in [[indices[0], indices[1]],
                                   [indices[1], indices[2]],
                                   [indices[2], indices[0]]]:
                        edge = [p1, p2]
Why not write:

                    for edge in numpy.asarray([[indices[0], indices[1]], (...)]):
                        (...)
                        index = 0
Where do you use index? Is it a lonely remnant?
                        edge2 = list(edge)
Why do you convert the unmodified edge list into a list again? Btw, I found your numbering quite a bit annoying; no one can tell from an appended 2 what purpose that xxx2 has. Furthermore, I think a slight speedup could be reached by:

                        unique = (edge.sum(), abs((edge * [1, -1]).sum()))
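To see why this works as an order-independent dictionary key, here is the idea as a small helper (edge_key is a hypothetical name, and edge must be an ndarray for .sum() to apply): (a + b, |a - b|) is symmetric in a and b, and since sum and absolute difference determine max and min, distinct unordered index pairs never collide.

```python
import numpy

def edge_key(edge):
    # Symmetric key for an undirected edge given as an index pair:
    # (a + b, |a - b|) is the same for (a, b) and (b, a), and
    # uniquely determines the unordered pair {a, b}.
    e = numpy.asarray(edge)
    return (int(e.sum()), int(abs((e * [1, -1]).sum())))
```

This replaces the sort-then-tuple dance with two cheap vectorizable reductions.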
                        edge2.sort()
                        edge2 = tuple(edge2)
                        if edge2 in edges:
                            edges[edge2][1] += 1
                        else:
                            edges[edge2] = [edge, 1]
        edges2 = []
        for edge_data in edges.values():
            if edge_data[1] == 1:
                p1 = object.transformed_vertices[sublist][edge_data[0][0]]
                p2 = object.transformed_vertices[sublist][edge_data[0][1]]
                edges2.append([p1, p2])
        return edges2
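For reference, the whole algorithm fits in one self-contained function; a condensed sketch under my own assumptions (the name silhouette_edges and the (i0, i1, i2, nindex) face layout are mine, and the back-facing test uses the dot-product sign instead of abs_angle_between_rad):

```python
import numpy

def silhouette_edges(vertices, normals, faces, lightpos):
    # faces: sequence of (i0, i1, i2, nindex) tuples. An edge lies on
    # the silhouette when it occurs in exactly one back-facing triangle.
    lightpos = numpy.asarray(lightpos)
    edges = {}
    for i0, i1, i2, nindex in faces:
        n = numpy.asarray(normals[nindex])
        # Back-facing w.r.t. the light: normal and (vertex - light) agree.
        if numpy.dot(n, numpy.asarray(vertices[i0]) - lightpos) > 0:
            for edge in ((i0, i1), (i1, i2), (i2, i0)):
                key = tuple(sorted(edge))  # orientation-independent key
                if key in edges:
                    edges[key][1] += 1
                else:
                    edges[key] = [edge, 1]
    # Keep only edges counted once: shared edges of two back-facing
    # triangles annihilate each other.
    return [(vertices[a], vertices[b])
            for (a, b), count in edges.values() if count == 1]
```

For a flat two-triangle quad lit from behind, this returns the four boundary edges and drops the shared diagonal.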
My 2 cents:

    class CountingEdge:
        def __init__(self, indices):
            self.indices = indices
            self.count = 0

    edges = {}
    (...)
    edges.setdefault(unique, CountingEdge(edge))
    edges[unique].count += 1

    for counting_edge in edges.values():
        if counting_edge.count == 1:
            edges_clean.append(
                object.transformed_vertices[sublist][counting_edge.indices])

provided that transformed_vertices[sublist] is an ndarray. You can iterate over ndarrays using standard Python iteration; it will provide an iterator object.

I'm always good for fresh ideas, although they nearly never get accepted, but here is another cent: I want to save you calculating the mutual annihilation for several light sources again and again.

BEFORE SHADING

1. For the mesh, calculate all edges. Put their unique tuples in a list. Store, together with the unique tuple, what triangle they originate from and what the original indices were. When you use the sorted indices as the unique value, the original indices will be virtually identical to the unique tuple.

    class EdgeOfTriangle:
        def __init__(self, unique, vindices, triangleindex):
            self.unique = unique
            self.vindices = vindices
            self.triangleindex = triangleindex

        def __hash__(self):  # For the "in" statement
            return hash(self.unique)

        def __eq__(self, other):
            return other == self.triangleindex

    edges = []
    for triangle_descriptor in triangle_descriptors:
        (...)
        edge1 = EdgeOfTriangle(unique, vindices1, triangle_descriptor.nindex)
        edge2 = EdgeOfTriangle(unique, vindices2, triangle_descriptor.nindex)
        (and so on)
        for new_edge in (edge1, edge2, edge3):
            if new_edge not in edges:
                edges.append(new_edge)
    edges = numpy.asarray(edges)

2. Make an ndarray matrix from numpy.zeros((len(edges), len(triangles))). Iterate through the list of EdgeOfTriangle objects created in step 1 and put a +1 at the element [row of the edge, edge_of_triangle.triangleindex] for each of the three edges of every triangle.
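Steps 1 and 2 can also be sketched without the EdgeOfTriangle class, with plain index arrays (build_impact_matrix is a hypothetical name; this is a simplification of the idea above, not the exact construction):

```python
import numpy

def build_impact_matrix(faces):
    # One row per unique (undirected) edge, one column per triangle;
    # a 1 at [row, col] means triangle `col` contains edge `row`.
    edge_index = {}   # sorted vertex-index pair -> row number
    hits = []         # (row, triangle) pairs that get a +1
    for t, (i0, i1, i2) in enumerate(faces):
        for edge in ((i0, i1), (i1, i2), (i2, i0)):
            key = tuple(sorted(edge))
            row = edge_index.setdefault(key, len(edge_index))
            hits.append((row, t))
    impact = numpy.zeros((len(edge_index), len(faces)), dtype=int)
    for row, col in hits:
        impact[row, col] = 1
    # The unique edges, in row order, as an (n_edges, 2) index array:
    edges = numpy.asarray(sorted(edge_index, key=edge_index.get))
    return edges, impact
```

For two triangles sharing one edge, the matrix has five rows, and the shared edge's row sums to 2 while the four boundary rows sum to 1.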
Now, this matrix tells you how often *all* edges will be selected given the back-facing triangles. The matrix shall be called impact_matrix. Virtually do this by:

    edges_reshaped = edges.reshape((len(edges), 1))
    triangles = numpy.arange(0, len(n_array)).reshape((1, len(n_array)))
    # Employs __eq__ above in EdgeOfTriangle:
    impact_matrix = (edges_reshaped == triangles)

DURING SHADING

3. Calculate the dot products with my old method. It gives you a list of floats.

4. Calculate numpy.dot(impact_matrix, dotproducts > 0). It gives you precisely the number of times each edge is selected:

    selected = (numpy.dot(impact_matrix, dotproducts > 0) == 1)

5. Calculate edge_array[selected], where edge_array is the array holding *all* edges occurring (built during step 1). It will give you the vertex indices of the edges comprising the silhouette.

    def getvindices(edge_of_triangle):
        return edge_of_triangle.vindices

    ugetvindices = numpy.vectorize(getvindices)
    silhouette_edges_of_triangles = edges[selected]
    silhouette_vindices = numpy.asarray(ugetvindices(silhouette_edges_of_triangles))
    silhouette_vertices = v_array[silhouette_vindices]

Checked everything only *once to twice* this time, fwiw,
Friedrich
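Steps 3 to 5 above condense into one function once the edges are stored as a plain (n_edges, 2) index array instead of objects; a sketch (all names here, silhouette_vertices included, are assumptions for illustration):

```python
import numpy

def silhouette_vertices(impact_matrix, dotproducts, edge_vindices, v_array):
    # impact_matrix: (n_edges, n_triangles) incidence matrix.
    # A triangle is back-facing when its dot product is positive; an
    # edge is on the silhouette when exactly one of its triangles is.
    backfacing = numpy.asarray(dotproducts) > 0
    times_selected = numpy.dot(impact_matrix, backfacing)
    selected = times_selected == 1
    # Fancy indexing: (n_silhouette, 2) indices -> (n_silhouette, 2, 3).
    return v_array[edge_vindices[selected]]
```

The precomputed impact_matrix is reused unchanged for every light source; only dotproducts changes per light, which is exactly the saving proposed above.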