Karma error when rendering with Redshift

Marco Dörner (Member):
As the title says, when rendering with Redshift and husk on the farm we are getting the error below. Can anybody help with that? (I'm posting it here because the traceback says it's a Karma error, but since we don't render with Karma that seems peculiar.)
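
For reference, the farm wrapper launches husk once per frame with roughly the command sketched below. This is a simplified illustration, not our actual submission script - the delegate name "Redshift", the exact flag spelling and the paths are stand-ins:

    # Rough sketch of the per-frame husk call issued by our farm wrapper.
    # Delegate name, flags and paths are illustrative assumptions.
    import subprocess

    frame = 1001
    cmd = [
        "husk",
        "--renderer", "Redshift",            # Hydra render delegate
        "--frame", str(frame),               # start frame
        "--frame-count", "1",                # one frame per task
        "--output", "/proj/renders/shot.%04d.exr" % frame,
        "/proj/usd/shot.usd",
    ]
    subprocess.run(cmd, check=True)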

2021.06.21 16:26:32 |  err: -- TRACEBACK BEGIN --
2021.06.21 16:26:32 | err: Traceback from karma 18.5.462 (Compiled on linux-x86_64-gcc6.3):
2021.06.21 16:26:32 | err: stackTrace() <libHoudiniUT.so>
2021.06.21 16:26:32 | err: signalCallback(UTsignalHandlerArg) <libHoudiniUT.so>
2021.06.21 16:26:32 | err: UT_Signal::UT_ComboSignalHandler::operator()(int, siginfo_t*, void*) const <libHoudiniUT.so>
2021.06.21 16:26:32 | err: UT_Signal::processSignal(int, siginfo_t*, void*) <libHoudiniUT.so>
2021.06.21 16:26:32 | err: _L_unlock_13 <libpthread.so.0>
2021.06.21 16:26:32 | err: RS_CachedMeshFlag_Get(unsigned int&, char const*) <libredshift-core.so>
2021.06.21 16:26:32 | err: RS_Denoise_Altus_DualPass(unsigned int, unsigned int, int, float*, int, float*, float*, int, float*, float*, int, float*, float*, int, float*, float*, float, float, float, float, bool) <libredshift-core.so>
2021.06.21 16:26:32 | err: pxrInternal_v0_20__pxrReserved__::RS_renderContextClass::buildRSMeshObject(pxrInternal_v0_20__pxrReserved__::SdfPath const&, std::map<pxrInternal_v0_20__pxrReserved__::TfToken, pxrInternal_v0_20__pxrReserved__::PrimvarSource, std::less<pxrInternal_v0_20__pxrReserved__::TfToken>, std::allocator<std::pair<pxrInternal_v0_20__pxrReserved__::TfToken const, pxrInternal_v0_20__pxrReserved__::PrimvarSource> > > const&, RSMesh*, pxrInternal_v0_20__pxrReserved__::RS_renderContextClass::rs_geometricData&, bool) <RSHydra.so>
2021.06.21 16:26:32 | err: pxrInternal_v0_20__pxrReserved__::HdRedshiftMesh::Sync(pxrInternal_v0_20__pxrReserved__::HdSceneDelegate*, pxrInternal_v0_20__pxrReserved__::HdRenderParam*, unsigned int*, pxrInternal_v0_20__pxrReserved__::TfToken const&) <RSHydra.so>
2021.06.21 16:26:32 | err: pxrInternal_v0_20__pxrReserved__::(anonymous namespace)::_SyncRPrims::Sync(unsigned long, unsigned long) <libpxr_hd.so>
2021.06.21 16:26:32 | err: void tbb::interface9::internal::dynamic_grainsize_mode<tbb::interface9::internal::adaptive_mode<tbb::interface9::internal::auto_partition_type> >::work_balance<tbb::interface9::internal::start_for<tbb::blocked_range<unsigned long>, void pxrInternal_v0_20__pxrReserved__::WorkParallelForN<std::_Bind<std::_Mem_fn<void (pxrInternal_v0_20__pxrReserved__::(anonymous namespace)::_SyncRPrims::*)(unsigned long, unsigned long)> (pxrInternal_v0_20__pxrReserved__::(anonymous namespace)::_SyncRPrims, std::_Placeholder<1>, std::_Placeholder<2>)> >(unsigned long, std::_Bind<std::_Mem_fn<void (pxrInternal_v0_20__pxrReserved__::(anonymous namespace)::_SyncRPrims::*)(unsigned long, unsigned long)> (pxrInternal_v0_20__pxrReserved__::(anonymous namespace)::_SyncRPrims, std::_Placeholder<1>, std::_Placeholder<2>)>&&, unsigned long)::Work_ParallelForN_TBB, tbb::auto_partitioner const>, tbb::blocked_range<unsigned long> >(tbb::interface9::internal::start_for<tbb::blocked_range<unsigned long>, void pxrInternal_v0_20__pxrReserved__::WorkParallelForN<std::_Bind<std::_Mem_fn<void (pxrInternal_v0_20__pxrReserved__::(anonymous namespace)::_SyncRPrims::*)(unsigned long, unsigned long)> (pxrInternal_v0_20__pxrReserved__::(anonymous namespace)::_SyncRPrims, std::_Placeholder<1>, std::_Placeholder<2>)> >(unsigned long, std::_Bind<std::_Mem_fn<void (pxrInternal_v0_20__pxrReserved__::(anonymous namespace)::_SyncRPrims::*)(unsigned long, unsigned long)> (pxrInternal_v0_20__pxrReserved__::(anonymous namespace)::_SyncRPrims, std::_Placeholder<1>, std::_Placeholder<2>)>&&, unsigned long)::Work_ParallelForN_TBB, tbb::auto_partitioner const>&, tbb::blocked_range<unsigned long>&) <libpxr_hd.so>
2021.06.21 16:26:32 | err: tbb::interface9::internal::start_for<tbb::blocked_range<unsigned long>, void pxrInternal_v0_20__pxrReserved__::WorkParallelForN<std::_Bind<std::_Mem_fn<void (pxrInternal_v0_20__pxrReserved__::(anonymous namespace)::_SyncRPrims::*)(unsigned long, unsigned long)> (pxrInternal_v0_20__pxrReserved__::(anonymous namespace)::_SyncRPrims, std::_Placeholder<1>, std::_Placeholder<2>)> >(unsigned long, std::_Bind<std::_Mem_fn<void (pxrInternal_v0_20__pxrReserved__::(anonymous namespace)::_SyncRPrims::*)(unsigned long, unsigned long)> (pxrInternal_v0_20__pxrReserved__::(anonymous namespace)::_SyncRPrims, std::_Placeholder<1>, std::_Placeholder<2>)>&&, unsigned long)::Work_ParallelForN_TBB, tbb::auto_partitioner const>::execute() <libpxr_hd.so>
2021.06.21 16:26:32 | err: tbb::internal::custom_scheduler<tbb::internal::IntelSchedulerTraits>::process_bypass_loop(tbb::internal::context_guard_helper<false>&, tbb::task*, long) (custom_scheduler.h:469)
2021.06.21 16:26:32 | err: tbb::internal::custom_scheduler<tbb::internal::IntelSchedulerTraits>::local_wait_for_all(tbb::task&, tbb::task*) (custom_scheduler.h:631)
2021.06.21 16:26:32 | err: tbb::internal::generic_scheduler::local_spawn_root_and_wait(tbb::task*, tbb::task*&) (scheduler.cpp:734)
2021.06.21 16:26:32 | err: pxrInternal_v0_20__pxrReserved__::WorkDispatcher::_InvokerTask<std::_Bind<pxrInternal_v0_20__pxrReserved__::HdRenderIndex::SyncAll(std::vector<std::shared_ptr<pxrInternal_v0_20__pxrReserved__::HdTask>, std::allocator<std::shared_ptr<pxrInternal_v0_20__pxrReserved__::HdTask> > >*, std::unordered_map<pxrInternal_v0_20__pxrReserved__::TfToken, pxrInternal_v0_20__pxrReserved__::VtValue, pxrInternal_v0_20__pxrReserved__::TfToken::HashFunctor, std::equal_to<pxrInternal_v0_20__pxrReserved__::TfToken>, std::allocator<std::pair<pxrInternal_v0_20__pxrReserved__::TfToken const, pxrInternal_v0_20__pxrReserved__::VtValue> > >*)::{lambda()#1} ()> >::execute() <libpxr_hd.so>
2021.06.21 16:26:32 | err: tbb::internal::custom_scheduler<tbb::internal::IntelSchedulerTraits>::process_bypass_loop(tbb::internal::context_guard_helper<false>&, tbb::task*, long) (custom_scheduler.h:469)
2021.06.21 16:26:32 | err: tbb::internal::custom_scheduler<tbb::internal::IntelSchedulerTraits>::local_wait_for_all(tbb::task&, tbb::task*) (custom_scheduler.h:631)
2021.06.21 16:26:32 | err: pxrInternal_v0_20__pxrReserved__::WorkDispatcher::Wait() <libpxr_work.so>
2021.06.21 16:26:32 | err: tbb::interface7::internal::task_arena_base::internal_execute(tbb::interface7::internal::delegate_base&) const (arena.cpp:1035)
2021.06.21 16:26:32 | err: pxrInternal_v0_20__pxrReserved__::WorkArenaDispatcher::Wait() <libpxr_work.so>
2021.06.21 16:26:32 | err: pxrInternal_v0_20__pxrReserved__::HdRenderIndex::SyncAll(std::vector<std::shared_ptr<pxrInternal_v0_20__pxrReserved__::HdTask>, std::allocator<std::shared_ptr<pxrInternal_v0_20__pxrReserved__::HdTask> > >*, std::unordered_map<pxrInternal_v0_20__pxrReserved__::TfToken, pxrInternal_v0_20__pxrReserved__::VtValue, pxrInternal_v0_20__pxrReserved__::TfToken::HashFunctor, std::equal_to<pxrInternal_v0_20__pxrReserved__::TfToken>, std::allocator<std::pair<pxrInternal_v0_20__pxrReserved__::TfToken const, pxrInternal_v0_20__pxrReserved__::VtValue> > >*) <libpxr_hd.so>
2021.06.21 16:26:32 | err: pxrInternal_v0_20__pxrReserved__::HdEngine::Execute(pxrInternal_v0_20__pxrReserved__::HdRenderIndex*, std::vector<std::shared_ptr<pxrInternal_v0_20__pxrReserved__::HdTask>, std::allocator<std::shared_ptr<pxrInternal_v0_20__pxrReserved__::HdTask> > >*) <libpxr_hd.so>
2021.06.21 16:26:32 | err: std::runtime_error::what() const <husk>
2021.06.21 16:26:32 | err: std::runtime_error::what() const <husk>
2021.06.21 16:26:32 | err: std::runtime_error::what() const <husk>
2021.06.21 16:26:32 | err: std::runtime_error::what() const <husk>
2021.06.21 16:26:32 | err: __libc_start_main <libc.so.6>
2021.06.21 16:26:32 | err: std::runtime_error::what() const <husk>
2021.06.21 16:26:32 | err: -- TRACEBACK END --
robp_sidefx (Staff):
Hi Marco,

That error message (Traceback from karma) should be interpreted as Traceback from husk. The husk application was originally called karma (and can still be launched as karma), and some bits of the code (including the branding for the crash handler) still refer to "karma" rather than "husk".

- Rob
Marco Dörner (Member):
Thanks Rob!

In that case, any idea what any of those lines mean? :D
The only line I can make anything of is the one mentioning RS_Denoise_Altus - however, that denoiser is not used in our setup.
robp_sidefx (Staff):
Hi Marco,

I suspect the Redshift team will be able to make more sense of that stack trace.

- Rob
Tim Crowson (Member):
Hi Marco, as I mentioned on the RS forum, we have encountered this too (or at least something similar), although I can't guarantee the issue is identical. For us, the problem has always been a failure of certain existing nodes (notably RenderSettings LOPs) to properly update once a new build of Redshift is installed. The nodes may visually appear to have picked up whatever changed in the latest build (though not always), but under the hood something is clearly off, as we reliably get this husk error. The solution for us has been to replace old RenderSettings nodes with new ones. I don't know whether the failure to update is on the RS side or the Houdini side. I don't care for the brute-force delete-and-recreate solution, but it has worked for us.
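
If you have a lot of them to swap, something along these lines in the Houdini Python shell does the brute-force replacement. The node type name below is a placeholder (check what your Redshift build actually registers for its RenderSettings LOP), and only parameters whose names still match get copied across:

    import hou

    # Placeholder - check the actual type name of the Redshift RenderSettings
    # LOP in your build (e.g. via the affected node's type().name()).
    RS_SETTINGS_TYPE = "redshift_rendersettings"

    def replace_rendersettings(lopnet_path="/stage"):
        lopnet = hou.node(lopnet_path)
        for old in lopnet.children():
            if old.type().name() != RS_SETTINGS_TYPE:
                continue
            # A freshly created node picks up the current build's parameter interface.
            new = lopnet.createNode(RS_SETTINGS_TYPE)
            # Copy values for parameters that still exist under the same name.
            for parm in old.parms():
                target = new.parm(parm.name())
                if target is None:
                    continue
                try:
                    target.set(parm.eval())
                except (hou.OperationFailed, TypeError):
                    pass  # parameter changed type between builds; keep the new default
            # Rewire the new node in place of the old one, then take over its name.
            for i, inp in enumerate(old.inputs()):
                if inp is not None:
                    new.setInput(i, inp)
            for down in old.outputs():
                for i, inp in enumerate(down.inputs()):
                    if inp == old:
                        down.setInput(i, new)
            name = old.name()
            old.destroy()
            new.setName(name)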
- Tim Crowson
Technical/CG Supervisor
Member:
Tim Crowson wrote:
The solution for us has been to replace old RenderSettings nodes with new ones. I don't know if the failure to properly update is on the RS side or the Houdini side. I don't care for the brute-force solution of delete-and-recreate, but it has worked for us.

I don't think they would ever update - the parameters are all spare parameters added at node creation time.
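
A quick way to confirm that on one of the affected nodes from the Python shell - if nearly every parameter reports as spare, a new Redshift build has no way to push interface changes into the existing node (the node path below is just an example):

    import hou

    # Example path - point this at one of the old Redshift RenderSettings LOPs.
    node = hou.node("/stage/rendersettings1")
    spare = [p.name() for p in node.parms() if p.isSpare()]
    print(len(spare), "spare parameters out of", len(node.parms()), "total")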