Found 15 posts.
Technical Discussion » How to easily export many rbd objects from a dop simulation such that they are considered separate objects in a single .bgeo upon import into a new scene?
- yamanash
- 15 posts
- Offline
Through some further experimentation I discovered the rbd packed geo node; it is essentially what I needed to accomplish the result I am after, and as a bonus it does not require me to split the different groups up into separate parts to be added to the DOP sim.
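The packed workflow works because each packed piece carries a per-point name attribute, and grouping by that attribute is what lets a single .bgeo round-trip as separate pieces. A minimal plain-Python sketch of the idea (this is an illustration, not the hou API; the point data and names are made up):

```python
from collections import defaultdict

def split_by_name(points, names):
    """Group flat per-point data back into per-piece lists by a name attribute.

    points: list of (x, y, z) tuples; names: parallel list of piece names.
    This mirrors, in plain Python, how a per-point 'name' attribute stored
    in a single .bgeo lets a downstream scene recover the separate pieces.
    """
    pieces = defaultdict(list)
    for pt, nm in zip(points, names):
        pieces[nm].append(pt)
    return dict(pieces)

pieces = split_by_name(
    [(0, 0, 0), (1, 0, 0), (2, 0, 0)],
    ["piece0", "piece1", "piece0"],
)
```

As long as every point carries its piece name, the flat geometry stream can always be re-partitioned, which is exactly what importing the packed .bgeo into a fresh scene relies on.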
Technical Discussion » How to easily export many rbd objects from a dop simulation such that they are considered separate objects in a single .bgeo upon import into a new scene?
- yamanash
- 15 posts
- Offline
Hey guys and gals, sorry if this is kind of a weird question; I am still very new to Houdini, just picking it up here and there when I have time. Anyways, Houdini masters: I have a scene in which many pieces of debris in an alembic archive react to the ground cracking in a DOP. The aim here is to use the DOP to first let the debris fall naturally onto the ground so there are no hovering objects, export the last frame of that part of the sim (with natural object positioning, no hovering) as a .bgeo, then bring that back into Houdini and start a new sim using the positions of the debris generated by the first. The problem I am encountering is that I cannot get the different debris to be considered different objects in the new scene I create for resuming the sim with the new positions. I am saving the .bgeo out with a dopimport node inside a dedicated geo node. Another method that occurred to me is writing the first sim out to disk (.sim), reading it back in, and using those positions to dictate the beginning of a separate DOP sim… I am really not sure which is the more correct method to go with here. I'm sure positioning debris with a DOP is a common task, so please, anyone with experience in this area, some guidance would be much appreciated by a total Houdini noob!
Thanks people!
Houdini Lounge » HQueue on Windows, network permission errors
- yamanash
- 15 posts
- Offline
I have managed to get a farm (3 machines, 24 cores) up and running with HQueue on Windows 10 and Windows 7. Disable UAC, and make sure the account the server service and client service run under has ownership of and full permissions to the shared hq folder / the H: mount. Otherwise just follow the manual provided by SideFX, I guess.
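Since most of these failures come down to the service account's actual write access on the share, it can save time to probe the share directly before submitting jobs. A small sketch (the probe filename is arbitrary, and this assumes nothing about HQueue internals):

```python
import os
import uuid

def share_is_writable(path):
    """Return True if we can create and delete a file under `path`.

    Creating a real file is a more reliable permission probe than
    os.access(), since ACLs on network shares are not always reflected
    in the mode bits that os.access() consults.
    """
    probe = os.path.join(path, ".hq_write_test_" + uuid.uuid4().hex)
    try:
        with open(probe, "w") as f:
            f.write("ok")
        os.remove(probe)
        return True
    except OSError:
        return False
```

Run it as the same account the HQueue client service runs under (e.g. `share_is_writable(r"H:\\")` on a client machine) to rule permissions in or out quickly.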
A side note: I am having an issue where, although my machines take and execute a job, they rarely use much CPU while doing it. It is much slower to run a sim on 3 machines than it is to run it on one for some reason… I was thinking maybe it's a network throughput bottleneck… but Task Manager doesn't show an insane amount of traffic happening on my LAN… Any ideas?
-Kyle
Technical Discussion » HQueue sim CPU usage...
- yamanash
- 15 posts
- Offline
Hello, this issue is killing me… I have 3 clients running: a 12-core Windows 10 machine and two Win7 6-core machines. I can get HQueue to take a job and distribute it properly. The job will even successfully complete, given enough time. However, each machine only seems to use a single thread to execute the sliced sim. What is up with that? Is it a bug, or am I missing a setting somewhere for something like "Max CPU"? This is using the latest build of H15.
Thanks!
Technical Discussion » HQueue Problem
- yamanash
- 15 posts
- Offline
Technical Discussion » HQueue write issue?
- yamanash
- 15 posts
- Offline
Found the issue. “\\KYLE-PC\hq ” in my server .ini should have used forward slashes instead. That's what I get for writing it at 3 am I guess :roll:
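For reference, the hou.py frame in the traceback quoted later in this thread shows the Python side happily opening `//KYLE-PC/hq/...` with forward slashes, so normalizing the .ini value (and stripping the stray trailing space visible in the quoted snippet above) is a reasonable belt-and-braces step. A tiny helper, assuming nothing about HQueue internals:

```python
def normalize_share_path(raw):
    """Normalize a Windows UNC share path read from an .ini value.

    Strips stray whitespace (a trailing space in a UNC path is easy to
    miss at 3 am) and flips backslashes to forward slashes, which the
    Python side of the pipeline accepts.
    """
    return raw.strip().replace("\\", "/")

normalized = normalize_share_path("\\\\KYLE-PC\\hq ")  # note the trailing space
```

The same helper applied to every path-like .ini value would make this class of typo harmless.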
Technical Discussion » HQueue write issue?
- yamanash
- 15 posts
- Offline
Hey guys, I am having a heck of a time getting my farm set up for my school finals. I have three machines I need to run sliced fluid sims on. Right now I am just trying to get the main workstation to complete an HQueue job… So: I have HQueue client and server installed on this machine on the C drive. Both services run fine under another admin account I created called HQueue. I have used the shelf tool for creating a sliced sim (a pyro sim in this case), as recommended per the HQueue documentation. The shared folder with a Houdini install in it is on another disk called F in the workstation; the specific folder is shared as hq to all other machines and is mounted on all of them as H:. I have no problems accessing it from any machine. The server's .ini file has been set up with the server IP of the PC it is running on (the workstation), and these lines have been set:
hqserver.sharedNetwork.path.windows = \\KYLE-PC\hq
hqserver.sharedNetwork.mount.windows = H:
Everything else in there is vanilla.
The problem seems to be in writing the slice files to the mounted H: drive, as I get this error when I submit the houdini file I have attached:
hqlib.callFunctionWithHQParms(hqlib.simulateSlice)
File "\\KYLE-PC\hq\houdini_distros\hfs.windows-x86_64\houdini\scripts\hqueue\hqlib.py", line 1864, in callFunctionWithHQParms
return function(**kwargs)
File "\\KYLE-PC\hq\houdini_distros\hfs.windows-x86_64\houdini\scripts\hqueue\hqlib.py", line 1532, in simulateSlice
_renderRop(rop)
File "\\KYLE-PC\hq\houdini_distros\hfs.windows-x86_64\houdini\scripts\hqueue\hqlib.py", line 1869, in _renderRop
rop.render(*args, **kwargs)
File "//KYLE-PC/hq/houdini_distros/hfs.windows-x86_64/houdini/python2.7libs\hou.py", line 32411, in render
return _hou.RopNode_render(*args, **kwargs)
hou.OperationFailed: The attempted operation failed.
Error: Failed to save output to file "Hprojects/geo/untitled.loadslices.1.bgeo.sc".
Error: Failed to save output to file "Hprojects/geo/untitled.loadslices.2.bgeo.sc".
I am really not sure why this is happening as I think I have all the relevant permissions.
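One hedged reading of those two Error lines: the output path came out as `Hprojects/geo/...` rather than `H:/projects/geo/...`, which is exactly what naive string concatenation produces if the mount's colon/separator is lost before the project-relative path is appended. A plain-Python illustration of that failure mode (a guess at the mechanism, not HQueue's actual code):

```python
mount = "H:"
rel = "projects/geo/untitled.loadslices.1.bgeo.sc"

# If the colon is stripped and the pieces are naively concatenated,
# the malformed path from the error message is reproduced exactly:
bad = mount.rstrip(":") + rel

# Joining with an explicit separator keeps the drive spec intact:
good = mount + "/" + rel
```

If this guess is right, the fix lives wherever `$HQROOT` / the mount point gets expanded into the output driver's path, not in the share permissions.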
Any suggestions peeps?
-Kyle
Here is the diagnostics output too:
Diagnostic Information for Job 75:
==================================
Job Name: Simulate -> HIP: untitled.hip ROP: save_slices (Slice 0)
Submitted By: Kyle
Job ID: 75
Parent Job ID(s): 73, 76
Number of Clients Assigned: 1
Job Status: failed
Report Generated On: December 12, 2015 01:52:08 AM
Job Properties:
===============
Description: None
Tries Left: 0
Priority: 5
Minimum Number of Hosts: 1
Maximum Number of Hosts: 1
Tags: single
Queue Time: December 12, 2015 01:15:04 AM
Runnable Time: December 12, 2015 01:46:19 AM
Command Start Time: December 12, 2015 01:50:04 AM
Command End Time:
Start Time: December 12, 2015 01:50:04 AM
End Time: December 12, 2015 01:50:18 AM
Time to Complete: 13s
Time in Queue: 35m 00s
Job Environment Variables:
==========================
HQCOMMANDS:
{
"hythonCommandsLinux": "export HOUDINI_PYTHON_VERSION=2.7 && export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && cd $HFS && source ./houdini_setup && hython -u",
"pythonCommandsMacosx": "export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && $HFS/Frameworks/Python.framework/Versions/2.7/bin/python",
"pythonCommandsLinux": "export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && $HFS/python/bin/python2.7",
"pythonCommandsWindows": "(set HFS=!HQROOT!\\houdini_distros\\hfs.!HQCLIENTARCH!) && \"!HFS!\\python27\\python2.7.exe\"",
"mantraCommandsLinux": "export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && cd $HFS && source ./houdini_setup && $HFS/python/bin/python2.7 $HFS/houdini/scripts/hqueue/hq_mantra.py",
"mantraCommandsMacosx": "export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && cd $HFS && source ./houdini_setup && $HFS/Frameworks/Python.framework/Versions/2.7/bin/python $HFS/houdini/scripts/hqueue/hq_mantra.py",
"hythonCommandsMacosx": "export HOUDINI_PYTHON_VERSION=2.7 && export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && cd $HFS && source ./houdini_setup && hython -u",
"hythonCommandsWindows": "(set HOUDINI_PYTHON_VERSION=2.7) && (set HFS=!HQROOT!\\houdini_distros\\hfs.!HQCLIENTARCH!) && (set PATH=!HQROOT!\\houdini_distros\\hfs.!HQCLIENTARCH!\\bin;!PATH!) && \"!HFS!\\bin\\hython\" -u",
"mantraCommandsWindows": "(set HFS=!HQROOT!\\houdini_distros\\hfs.!HQCLIENTARCH!) && \"!HFS!\\python27\\python2.7.exe\" \"!HFS!\\houdini\\scripts\\hqueue\\hq_mantra.py\""
}
HQPARMS:
{
"controls_node": "/obj/pyro_sim/DISTRIBUTE_pyro_CONTROLS",
"dirs_to_create": [
"$HIP/geo"
],
"tracker_port": 54534,
"hip_file": "$HQROOT/projects/untitled.hip",
"output_driver": "/obj/distribute_pyro/save_slices",
"enable_perf_mon": 0,
"slice_divs": [
1,
1,
1
],
"tracker_host": "KYLE-PC",
"slice_num": 0,
"slice_type": "volume"
}
HQHOSTS:
KYLE-PC
Job Conditions and Requirements:
================================
hostname any KYLE-PC
Executed Client Job Commands:
=============================
Windows Command:
(set HOUDINI_PYTHON_VERSION=2.7) && (set HFS=!HQROOT!\houdini_distros\hfs.!HQCLIENTARCH!) && (set PATH=!HQROOT!\houdini_distros\hfs.!HQCLIENTARCH!\bin;!PATH!) && "!HFS!\bin\hython" -u "!HFS!\houdini\scripts\hqueue\hq_sim_slice.py"
Client Machine Specification (KYLE-PC):
=======================================
DNS Name: KYLE-PC
Client ID: 1
Operating System: windows
Architecture: x86_64
Number of CPUs: 24
CPU Speed: 4000.0
Memory: 25156780
Client Machine Configuration File Contents (KYLE-PC):
=====================================================
server = KYLE-PC
port = 5000
sharedNetwork.mount = \\KYLE-PC\hq
HQueue Server Configuration File Contents:
==========================================
#
# hqserver - Pylons configuration
#
# The %(here)s variable will be replaced with the parent directory of this file
#
email_to = you@yourdomain.com
smtp_server = localhost
error_email_from = paste@localhost
use = egg:Paste#http
host = 0.0.0.0
port = 5000
# The shared network.
hqserver.sharedNetwork.host = KYLE-PC
hqserver.sharedNetwork.path.linux = %(here)s/shared
hqserver.sharedNetwork.path.windows = \\KYLE-PC\hq
hqserver.sharedNetwork.path.macosx = %(here)s/HQShared
hqserver.sharedNetwork.mount.linux = /mnt/hq
hqserver.sharedNetwork.mount.windows = H:
hqserver.sharedNetwork.mount.macosx = /Volumes/HQShared
# Server port number.
hqserver.port = 5000
# Where to save job output
job_logs_dir = %(here)s/job_logs
# Specify the database for SQLAlchemy to use
sqlalchemy.default.url = sqlite:///%(here)s/db/hqserver.db
# This is required if using mysql
sqlalchemy.default.pool_recycle = 3600
# This will force a thread to reuse connections.
sqlalchemy.default.strategy = threadlocal
#########################################################################
# Uncomment these configuration values if you are using a MySQL database.
#########################################################################
# The maximum number of database connections available in the
# connection pool. If you see "QueuePool limit of size" messages
# in the errors.log, then you should increase the value of pool_size.
# This is typically done for farms with a large number of client machines.
#sqlalchemy.default.pool_size = 30
#sqlalchemy.default.max_overflow = 20
# Where to publish myself in avahi
# hqnode will use this to connect
publish_url = http://hostname.domain.com:5000
# How many minutes before a client is considered inactive
hqserver.activeTimeout = 3
# How many days before jobs are deleted
hqserver.expireJobsDays = 10
# The maximum number of jobs (under the same root parent job) that can fail on
# a single client before a condition is dynamically added to that root parent
# job (and recursively all its children) that excludes the client from ever
# running this job/these jobs again. This value should be a positive integer
# greater than zero. To disable this feature, set this value to zero.
hqserver.maxFailsAllowed = 5
# The priority that the 'upgrade' job gets.
hqserver.upgradePriority = 100
use = egg:hqserver
full_stack = True
cache_dir = %(here)s/data
beaker.session.key = hqserver
beaker.session.secret = somesecret
app_instance_uuid = {fa64a6d1-ae3f-43c1-8141-9c29fdd9d418}
# Logging Setup
keys = root
keys = console
keys = generic
# Change to "level = DEBUG" to see debug messages in the log.
level = INFO
handlers = console
# This handler backs up the log when it reaches 10Mb
# and keeps at most 5 backup copies.
class = handlers.RotatingFileHandler
args = ("hqserver.log", "a", 10485760, 5)
level = NOTSET
formatter = generic
format = %(asctime)s %(levelname)-5.5s %(message)s
datefmt = %B %d, %Y %H:%M:%S
Job Status Log:
===============
December 12, 2015 01:15:04 AM: Assigned to KYLE-PC (master)
December 12, 2015 01:15:10 AM: setting status to running
December 12, 2015 01:15:23 AM: setting status to failed
December 12, 2015 01:18:28 AM: Rescheduling…
December 12, 2015 01:18:28 AM: setting status to runnable
December 12, 2015 01:18:28 AM: Assigned to KYLE-PC (master)
December 12, 2015 01:18:35 AM: setting status to running
December 12, 2015 01:18:47 AM: setting status to failed
December 12, 2015 01:23:18 AM: setting status to runnable
December 12, 2015 01:23:19 AM: Assigned to KYLE-PC (master)
December 12, 2015 01:23:20 AM: setting status to running
December 12, 2015 01:23:33 AM: setting status to failed
December 12, 2015 01:29:44 AM: setting status to runnable
December 12, 2015 01:29:44 AM: Assigned to KYLE-PC (master)
December 12, 2015 01:29:44 AM: setting status to running
December 12, 2015 01:29:57 AM: setting status to failed
December 12, 2015 01:34:17 AM: setting status to runnable
December 12, 2015 01:34:17 AM: Assigned to KYLE-PC (master)
December 12, 2015 01:38:17 AM: setting status to abandoned
December 12, 2015 01:46:19 AM: setting status to runnable
December 12, 2015 01:50:04 AM: Assigned to KYLE-PC (master)
December 12, 2015 01:50:04 AM: setting status to running
December 12, 2015 01:50:18 AM: setting status to failed
Technical Discussion » Rbd Driven Pyro issue?
- yamanash
- 15 posts
- Offline
Update: this problem actually seems to persist when using just normal particles as well. I have to manually reset the simulation each time (the auto resim flag is checked, so I'm not sure why this is necessary?) and then the particles are only emitted for a split second and then stop. Is there some new default setting in H14 that I am unaware of? I can't possibly see why SideFX would intentionally do this. Also, there is a warning that keeps coming up on the source nodes for both the pyro sims and the regular particle sims: "SOP solver cooked in immediate mode. Cached results incorrect." I am not sure where to change the cache mode of this solver, but I strongly suspect it is playing a part in this issue I'm having.
Is anyone else out there having similar issues with H14?
Technical Discussion » Rbd Driven Pyro issue?
- yamanash
- 15 posts
- Offline
Hey guys! I am fairly new to Houdini. I recently upgraded to V14 and I seem to be experiencing some difficulties with the seemingly simple task of getting pyro smoke to emit correctly from a simulated RBD object. The specific issue is that the smoke volume box does not follow the RBD object correctly all the time; even when it does, the smoke emission only lasts a brief second and then stops, while the source volume for the smoke is still within the bounds of the smoke sim box. This does not seem normal to me and I don't recall ever having this problem with V13. I will attach an example of what I'm talking about below. I hope someone can spot what I'm doing wrong! :roll:
Thanks in advance!
Technical Discussion » Particle emission from edges in H 13
- yamanash
- 15 posts
- Offline
Hello again. I have been experimenting with different ways of implementing the pop source node in DOPs in H13. It has far fewer options for emission sources than the old node used in POP nets. My question is: how do I go about emitting particles from geo edges only in a DOP? I'm assuming I need to pass the edge data along to the DOP explicitly with VEX or something, but I don't know what SOP to use / what script to write. I am extremely new to this software and I don't have any real means of viewing tutorials currently, so go easy!
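One common approach is to scatter emission points along the edges on the SOP side and then point the POP source at those points. The geometric core of that (uniform sampling along a set of edges, weighted by edge length) can be sketched in plain Python; this is an illustration of the math, not hou or VEX:

```python
import math
import random

def sample_edges(points, edges, n, seed=0):
    """Scatter n emission positions uniformly along a set of edges.

    points: list of (x, y, z); edges: list of (i, j) index pairs into points.
    Longer edges receive proportionally more samples, the same weighting
    a SOP-side scatter would apply before feeding positions to a POP source.
    """
    rng = random.Random(seed)
    lengths = [math.dist(points[i], points[j]) for i, j in edges]
    total = sum(lengths)
    samples = []
    for _ in range(n):
        t = rng.uniform(0.0, total)
        for (i, j), length in zip(edges, lengths):
            if t <= length or (i, j) == edges[-1]:
                u = min(1.0, t / length) if length else 0.0
                a, b = points[i], points[j]
                samples.append(tuple(a[k] + u * (b[k] - a[k]) for k in range(3)))
                break
            t -= length
    return samples
```

In Houdini terms the same effect is usually achieved by converting the edge groups to polylines and resampling/scattering on them before the DOP import.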
Thanks guys!
Technical Discussion » massive loading time between rendered .sim frames?
- yamanash
- 15 posts
- Offline
Technical Discussion » massive loading time between rendered .sim frames?
- yamanash
- 15 posts
- Offline
Technical Discussion » massive loading time between rendered .sim frames?
- yamanash
- 15 posts
- Offline
Hello, I am having some issues with getting Houdini to handle .sim files. I can open them up and get them to display the way I want now; however, if I want to scrub to a specific frame it takes an age! The further into the animation, the longer it takes. It's almost like it's cooking everything out again from scratch. The first few frames load reasonably quickly, but anything after that is absurd. I literally just spent 35 min opening a file with nothing in it but a DOP node containing a file node to read in my .sim file and a geo node with a DOP I/O… Is this normal? It seems a little ludicrous to me. I have 12 cores @ 4.2GHz and 48GB of RAM working on a simple-ish smoke and particle sim.
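The behavior described is consistent with the chain being re-cooked from the start for each scrub (a DOP network is inherently sequential in time). Caching frames that have already been loaded, or writing per-frame .bgeo files instead of scrubbing through one .sim stream, sidesteps the recompute. The caching idea in plain Python (the loader body is a stand-in, not a real .sim reader):

```python
from functools import lru_cache

READS = {"count": 0}  # instrumented so the caching effect is visible

@lru_cache(maxsize=64)
def load_frame(frame):
    """Stand-in for an expensive per-frame read/cook.

    With lru_cache, revisiting a frame while scrubbing is a dictionary
    lookup instead of a full re-read, which is the same win a per-frame
    disk cache gives a DOP network.
    """
    READS["count"] += 1
    return "frame-%d" % frame  # placeholder payload

load_frame(10)
load_frame(10)  # second access served from the cache, no re-read
```

The memory-versus-recompute trade is tunable via `maxsize`, just as a disk cache trades space for scrub speed.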
Thanks in advance for your posts
Technical Discussion » writing/reading .sim file issue
- yamanash
- 15 posts
- Offline
Figured it out last night… I needed to import the .sim output into a geometry node and select just the fields I wanted to render… I literally have no interwebs or tutorials to help with things like this.
Technical Discussion » writing/reading .sim file issue
- yamanash
- 15 posts
- Offline
Hello, I just started learning Houdini a few days ago. I have a sim consisting of pyro smoke and some particles affected by the smoke. At the end of the sim is a file node that writes each frame to disk. Upon reading the sim back into a fresh scene everything seems to be okay, but after the first frame there is a box around the simulation with the same dimensions as the smoke's bounding box. This box obscures the simulation both in viewports and in renders… How do I eliminate this unwanted geo?
Thanks!