Hello guys.
I hope you are doing well these days.
I've decided to read and implement scientific papers (specifically SIGGRAPH ones) on various topics I'm interested in, such as the medial axis, straight skeletons, two-way coupling between solids and fluids, various fluid implementations like state-of-the-art smoothed particle hydrodynamics, and many more...
Many of Houdini's features are based on these papers such as grain, vellum, FLIP, APIC, narrow band, etc.
The problem that I'm facing now is how to understand these papers well enough to implement them, not just to know generally what's going on.
I showed some of these papers to experienced university professors with degrees in mathematics or engineering, but in the end they declined to follow up with further investigation or information.
Now my question is: why are these papers so concise that even university professors decline to follow them up, given the energy and time it takes to comprehend these topics?
For whom are these papers written?
How can the scientific community accept such conciseness in these papers (besides validating that they even work at the quality shown), despite the fact that professionals need to put in so much effort to comprehend them?
Are there any resources where the complete form of these scientific papers is available?
I would greatly appreciate it if anyone could shine a light on this topic.
Thank you so much.
Scientific Paper Implementation in Houdini (Suggestions and Guidelines)
 NG
 Member
 207 posts
 Joined: March 2018
 Offline
 vik_lc
 Member
 105 posts
 Joined: May 2017
 Offline
This might be interesting: Nerd Rant: Math & Papers [www.youtube.com]. Moritz and Manuel implement algorithms from papers in their tutorials from time to time. But I'm not deep enough into the subject to give you advice here, even if what you say is plausible. All I can say is that understanding such papers and being able to implement them are two separate skills; I think often one of them fails, and finding the practical causes is often difficult.
Good luck and success.
 Digipiction
 Member
 166 posts
 Joined: March 2014
 Offline
I've tried and failed at doing this numerous times in the past. Usually the papers end up referencing other papers that contain information you need to know, skipping steps, discussing things in an overly mathematical fashion etc.
Sometimes the concepts are really really easy, but instead of explaining them in a brief sentence they're presented as a crazy equation.
How a normal person might describe it:
We have a 3d cylinder. We loop over all the points, find each point's neighbors and average out those positions. We assign the newly averaged position to the point, thus smoothing the entire mesh.
The way a paper might describe it:
Let V = {v_i} be the set of vertices in the 3D cylindrical mesh, where i = 1, 2, ..., n, and n is the total number of vertices. For each vertex v_i, let N(v_i) be the set of neighboring vertices. Our mesh smoothing algorithm can be described as follows:
For each vertex v_i in V, compute the centroid C_i of the neighboring vertices:
C_i = (1/|N(v_i)|) * Σ_{j ∈ N(v_i)} v_j
where |N(v_i)| is the cardinality of the set N(v_i), i.e. the number of neighboring vertices of v_i.
Update the position of each vertex v_i with the computed centroid:
v_i_new = C_i
Repeat steps 1 and 2 for a predefined number of iterations or until a convergence criterion is met.
The proposed algorithm redistributes the vertex positions by calculating the average of their neighboring vertices, effectively smoothing the mesh and improving its overall quality. This iterative process can be tuned according to specific requirements, such as the desired level of mesh homogeneity or an acceptable error threshold.
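For what it's worth, the "normal person" description above maps almost line for line onto code. Here is a minimal sketch in plain Python (not Houdini VEX; the adjacency-list mesh and the three-point example are hypothetical toy data, not from any paper):

```python
def smooth(positions, neighbors, iterations=1):
    """Laplacian smoothing: move each vertex to the centroid of its neighbors.

    positions: list of (x, y, z) tuples
    neighbors: list of lists; neighbors[i] holds the indices adjacent to vertex i
    """
    for _ in range(iterations):
        new_positions = []
        for i, nbrs in enumerate(neighbors):
            if not nbrs:
                # Isolated vertex: nothing to average, keep it as-is.
                new_positions.append(positions[i])
                continue
            # Centroid of the neighboring vertices: C_i = (1/|N(v_i)|) * sum of v_j.
            cx = sum(positions[j][0] for j in nbrs) / len(nbrs)
            cy = sum(positions[j][1] for j in nbrs) / len(nbrs)
            cz = sum(positions[j][2] for j in nbrs) / len(nbrs)
            new_positions.append((cx, cy, cz))
        # Replace all positions at once so one pass doesn't read its own output.
        positions = new_positions
    return positions

# Toy example: three collinear points; the middle one's neighbors are its ends.
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
adj = [[1], [0, 2], [1]]
print(smooth(pts, adj))
```

Notice that the update is done from a second list rather than in place; that corresponds to the paper's "update each v_i with the computed C_i" as a simultaneous step, not a sequential one.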
Edited by Digipiction, March 27, 2023 13:25:24
 alexwheezy
 Member
 153 posts
 Joined: Jan. 2013
 Offline
 animatrix_
 Member
 4224 posts
 Joined: Feb. 2012
 Offline
That's why there is no single skill of "being able to understand any white paper". It all depends on the subject matter, the algorithm, and the paper at hand.
I find it best when the paper has some code or at the very least pseudocode associated with it.
Senior FX TD @ Industrial Light & Magic
Get to the NEXT level in Houdini & VEX with Pragmatic VEX! [www.pragmaticvfx.com]
youtube.com/@pragmaticvfx  patreon.com/animatrix  animatrix2k7.gumroad.com
 tshead2
 Member
 19 posts
 Joined: Aug. 2018
 Offline
NG:
Rightly or wrongly, computer science papers are often judged on whether they "can be implemented by the average graduate student in the field", which may or may not describe your circumstances. The mathematical notation is intended to capture high-level ideas without tying them to the language / OS / programming paradigm of the month. If you stick with it long enough, it starts to make more sense.
Cheers,
Tim
 NG
 Member
 207 posts
 Joined: March 2018
 Offline
vik_lc
This might be interesting: Nerd Rant: Math & Papers [www.youtube.com]. Moritz and Manuel implement algorithms from papers in their tutorials from time to time. But I'm not deep enough into the subject to give you advice here, even if what you say is plausible. All I can say is that understanding such papers and being able to implement them are two separate skills; I think often one of them fails, and finding the practical causes is often difficult.
Good luck and success.
Thanks a lot, I really appreciate it.
Digipiction
I've tried and failed at doing this numerous times in the past. Usually the papers end up referencing other papers that contain information you need to know, skipping steps, discussing things in an overly mathematical fashion etc.
Sometimes the concepts are really really easy, but instead of explaining them in a brief sentence they're presented as a crazy equation.
How a normal person might describe it:
We have a 3d cylinder. We loop over all the points, find each point's neighbors and average out those positions. We assign the newly averaged position to the point, thus smoothing the entire mesh.
The way a paper might describe it:
Let V = {v_i} be the set of vertices in the 3D cylindrical mesh, where i = 1, 2, ..., n, and n is the total number of vertices. For each vertex v_i, let N(v_i) be the set of neighboring vertices. Our mesh smoothing algorithm can be described as follows:
For each vertex v_i in V, compute the centroid C_i of the neighboring vertices:
C_i = (1/|N(v_i)|) * Σ_{j ∈ N(v_i)} v_j
where |N(v_i)| is the cardinality of the set N(v_i), i.e. the number of neighboring vertices of v_i.
Update the position of each vertex v_i with the computed centroid:
v_i_new = C_i
Repeat steps 1 and 2 for a predefined number of iterations or until a convergence criterion is met.
The proposed algorithm redistributes the vertex positions by calculating the average of their neighboring vertices, effectively smoothing the mesh and improving its overall quality. This iterative process can be tuned according to specific requirements, such as the desired level of mesh homogeneity or an acceptable error threshold.
Yes, this is exactly what I'm talking about.
I'm not sure why they insist on writing this way!
alexwheezy
It is for this reason that it is not easy to understand these methods just by reading an article; only a few people among many thousands of artists are actually doing this. They write new solvers and implement new algorithms, and it seems to me that this requires much more than just desire.
Thanks a lot, I really appreciate it.
tshead2
NG:
Rightly or wrongly, computer science papers are often judged on whether they "can be implemented by the average graduate student in the field", which may or may not describe your circumstances. The mathematical notation is intended to capture high-level ideas without tying them to the language / OS / programming paradigm of the month. If you stick with it long enough, it starts to make more sense.
Cheers,
Tim
Another problem is that every paper tries to show the world that its approach and methods are the best among the others. You can't tell anything about that until you finally test it yourself, after putting in an endless amount of time to implement it correctly.
animatrix_
That's why there is not a single skill as being able to understand any white paper. It all depends on the subject matter, algorithm and the paper at hand.
I find it best when the paper has some code or at the very least pseudocode associated with it.
Thank you so much dear @animatrix_.
You know, the code is rarely available. Many paper authors decide not to publish their code on the internet.
I remember implementing the lattice Boltzmann method in Houdini, and there were a couple of good examples on the internet that made it unnecessary to read such nonsense papers; but as soon as you want to add more to it, such as interaction with moving obstacles, you need to read them again!
Therefore, even for implementing simple things, you need to spend a lot of time reading and comprehending these papers and hope that in the end you have something in your hands.
Now, my questions boil down to:
1. Why don't journals stop accepting papers in this format? How do they even convince themselves that the proposed methods work the way shown in the papers?
2. Whose responsibility is it to read and understand these papers, not just in general but deeply enough to implement them? Technical directors? Research and development staff? Software developers? Technical artists? Etc.
3. Are there any mechanisms to require paper authors to publish their full work with complete validation, verification, and explanation?
4. How do such papers get built upon, given their conciseness, when no code or further information is shared to put them to a real test?
I found many similar papers trying to explain similar topics without proposing anything new!
Looking forward to your answers guys.
Thank you so much.
