Found 144 posts.
Technical Discussion » Help is still broken!
- itriix
- 152 posts
- Offline
Technical Discussion » Old vs. New Point SOP
- itriix
- 152 posts
- Offline
Will do.
Yeah - a context-sensitive expression list sounds pretty good too.
Probably something flawed with my logic here, but my thought was to add an “Attribute/Self Value” parameter field that could represent “self”, sitting right under the Constant Value parameter boxes.
In the VEXpression box, “self” could then just reference this “Attribute Value”, just like “value” references the “Constant Value”. When you change the “Attribute” pull-down menu option, it would update the pre-defined values, like @P.x, @P.y, @P.z, in the “Attribute/Self Value” parameter fields. Something like in the image below:
I guess this doesn't really solve the original issue with the Point SOP, because it looks like it would just re-introduce the same problems as before. Hmmm… maybe three VEX snippet boxes instead of three parameter value boxes, each preset to write to an individual attribute component under the hood?
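To make the three-snippet-box idea concrete, here's a rough sketch of what those boxes might expand to under the hood (this is a hypothetical UI - the VEX below is only illustrative, run over points):

```vex
// Hypothetical expansion of three per-component snippet boxes,
// each one writing to a single component of the chosen attribute:
@P.x = @P.x;                  // X box: "self" -> leave the component unchanged
@P.y = sin(radians(@ptnum));  // Y box: replace the component with a sine wave
@P.z = @P.z;                  // Z box: "self" -> leave the component unchanged
```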
As for Constant Value - yeah that one seems a bit tricky. Possibly have a toggle for it.
Edited by itriix - March 22, 2017 19:48:54
Technical Discussion » Old vs. New Point SOP
- itriix
- 152 posts
- Offline
I'm all about the efficiency improvements, and I understand measures needed to be taken to make that happen. For myself and many others, I'm sure it didn't take long to adapt to the new approaches, but my initial response was from the perspective of an educator, teaching students from undergrad to grad level, with widely varying experience in CG.
At one end of the spectrum the changes are welcomed; at the other end, a little more technical explanation is now needed to get students up and running. Not that it's a bad thing - it just requires approaching educational material from a different angle. In the end, it's for the benefit of all Houdini artists.
So, I guess it's time to say goodbye to those we've lost - I surely hope the Point, Copy and Group SOPs got proper burials. They'd better have been epic! We had some amazing adventures! Out with the old, in with the new! I look forward to the better, brighter future of Houdini.
~ I still maintain that the Attribute Expression SOP could use some work.
Edited by itriix - March 22, 2017 13:25:56
Technical Discussion » Help is still broken!
- itriix
- 152 posts
- Offline
My Help docs are still broken as well. This is on a machine with a clean install of Windows 10, no modifications to any environment variables, and a clean install of Houdini.
One thing I've noticed that seems to help (temporarily) is to open Houdini with the Technical Desktop (with a Python Shell pane). The Help docs seem to work a little longer this way.
(Otherwise, the Help docs usually bug out and either crash Houdini or require Houdini to be restarted in order to use them a second time.)
A bug was submitted a while ago - no updates as of yet, though.
Technical Discussion » Old vs. New Point SOP
- itriix
- 152 posts
- Offline
In the old Point SOP, we could simply put sin(@ptnum) in the PY parameter field. I believe this was a much simpler way for beginners to get used to expressions and the concepts of variables and attributes. The old Point SOP also had the relevant @attributes placed in the parameter fields. This was a great way for new users to see: oh, @P.x, @P.y, @P.z is how you access the different components of the position, or @N.x, @N.y, @N.z for normals.
If the default is now:
Set Constant Value to: 0, 1, 0
Set VEXpression to: self + value * sin(radians(@ptnum))
That's a lot of additional work - and possible “human error”, just to get a sine wave.
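For comparison, the same sine-wave offset can already be written as a one-line snippet in an Attribute Wrangle (a sketch of the equivalent wrangle code, not the Attribute Expression SOP's internal implementation):

```vex
// Attribute Wrangle, run over Points:
// @ptnum is the point number; radians() converts it before sin().
@P.y += sin(radians(@ptnum));
```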
It's beginning to feel much more verbose. It also doesn't give any visual clues as to how you might reference a particular attribute (such as position). While, yes, you see Position(P) in the drop-down menu, it doesn't visually show it being used like @P.x, @P.y, @P.z.
While this workflow is probably fine for older, more seasoned users who are used to typing expressions, referencing attributes and using variables, newer users (in my opinion) are not going to find it as simple.
Possible solution:
What if the Attribute Expression SOP included a “self” field - vector or scalar, depending on the attribute - whose components are pre-filled with the relevant references, e.g. @P.x, @P.y, @P.z for position? This would mimic the “old” Point SOP in that specific respect.
This would have two benefits: 1) newer users get a visual reference for how to go about accessing specific attributes; 2) it's less verbose - we could just set the self.y field to sin(radians(@ptnum)) and call it a day.
Thoughts?
Edited by itriix - March 17, 2017 17:08:36
Houdini Lounge » H14/15 new pc build
- itriix
- 152 posts
- Offline
5960x or 6700k?
6700k 4.0GHz base, 4-core, 8M cache ($360)
5960x 3GHz, 8-core, 20M cache ($1060)
Can anyone say with confidence one of these will perform better than the other? I'm primarily going to be doing large sims. I know Twod mentioned diminishing returns above 8 threads.
One of these bad boys is going to be paired with 64GB of RAM and a GTX Titan X, and I'm going to call it a day! So what's the final vote, everyone? Thanks for all your time.
Houdini Lounge » H14/15 new pc build
- itriix
- 152 posts
- Offline
I'm closing in on my final build. After researching numerous benchmark comparisons, devouring forums and discussing possible builds, I've been steered towards a single high-clock, high-core CPU instead of a low-clock, high-core dual Xeon.
I would like to know if anyone has used or is familiar with the following CPUs, and which one would give me the best performance in Houdini.
Intel i7 6700k (Skylake) - Quad Core, 4.0GHz, 8M Cache ($339.00)
Intel i7 5930k - 6 Core, 3.5GHz, 15M Cache ($594.00)
Intel i7 5960X - 8 Core, 3.0GHz, 20M Cache ($1059.00)
Intel Xeon E5-1650v3 - 6 Core, 3.5GHz, 15M Cache ($586.00)
Intel Xeon E5-1660v3 - 8 Core, 3.0GHz, 20M Cache ($1080.00)
The 6700k, 5930k and 5960x all support 64GB of RAM, which I will be getting. The E5-1650v3 and E5-1660v3 support 768GB of RAM.
Many people have already said that the i7 6700k can easily be OC'd to a comfortable 4.5 or 4.6GHz, so I'm curious whether having such a high clock speed would outweigh its 4 cores and 8M cache.
As previously mentioned, this system is going to be an all-in-one for doing big sims and rendering (both of which are multi-threaded). Are the lower-clock, higher-core CPUs actually going to perform better than the insanely clocked 6700k with fewer cores? Also, how much does the L3 cache size help? Is there going to be a huge benefit to having, say, 20M of cache vs. 8M?
Thanks everyone! I can't wait to get this thing purchased and built up. I really appreciate any final thoughts before I get this thing going.
Houdini Lounge » H14/15 new pc build
- itriix
- 152 posts
- Offline
This would be for my solo work.
The cheaper option, with the ability to get a second system if needed, is appealing. Honestly, if I need to upgrade, I can just build a new “cheap” system that is more up with the times and turn the old one into a secondary system.
I just kept thinking the dual Xeon would be a better all-in-one, longer-lasting option, but yeah, a more powerful single CPU, maxed out, for less money, is probably the way to go!
Thanks for your thoughts.
Houdini Lounge » H14/15 new pc build
- itriix
- 152 posts
- Offline
Okay, so here are a few options I've put together with PCPartPicker:
Dual Intel Xeon E5-2630V3 2.4GHz 8-core:
http://pcpartpicker.com/user/Epheks/saved/fKwv6h [pcpartpicker.com]
Single Intel Xeon E5-1660V3 3.0 GHz 8-core:
http://pcpartpicker.com/user/Epheks/saved/hHCWGX [pcpartpicker.com]
Intel i7 6700K 4.0GHz Quad-Core Skylake:
http://pcpartpicker.com/user/Epheks/saved/Dsd2FT [pcpartpicker.com]
Each system has 64GB of RAM. The main difference is upgradability: the dual and single Xeon setups have more room to upgrade in the future, whereas the i7 is basically maxed out.
Any thoughts are welcome! Also, if you notice anything in the parts list that might bottleneck the system, please point that out.
Cheers
Houdini Lounge » Xeons vs I7, the eternal struggle
- itriix
- 152 posts
- Offline
I've been having the same questions. I'm going in circles debating whether I should spend the additional money on a dual Xeon setup or a single CPU.
Options I'm considering for dual vs. single CPU:
dual Xeon E5-2630 v3 2.4GHz, 8-core CPUs
single i7 Skylake or 5960X, or even a higher-GHz E5.
The dual Xeon setup will cost more but offers lots of upgradability (in terms of more RAM over time)… I plan on getting 64GB to start. The CPUs could also be upgraded when the higher E5s become a bit cheaper.
As for the single setup, it would be cheaper but have much less upgradability - unless I find a dual-socket mobo and only utilize one socket temporarily until I can match the CPU. That would allow for more upgradability as well.
Something like the i7 Skylake won't give many options for upgrades, because it maxes out at 64GB of RAM and is currently the best version of that processor. That system would just have to last, and I'd buy a whole new system at another time.
What's really been getting at me, though, is whether the dual Xeon is really going to give me “THAT MUCH MORE” performance than the single setup. As noted, the single, higher-clock CPU may win out in more single-threaded scenarios, whereas the dual Xeons will win in the multithreaded cases.
For an all-around computer - for big sims, rendering and essentially the entire pipeline - it still feels like the dual Xeon is the better “and longer-lasting” workstation to go with. However, I am still looking for more info on that.
Houdini Lounge » H14/15 new pc build
- itriix
- 152 posts
- Offline
Thanks guys,
I'll definitely keep all this in mind. Yeah, I definitely feel the Titan X will be the best route in terms of my GPU.
As for RAM, I'm aiming for larger amounts, which is why I was leaning toward the dual-socket LGA2011-v3 motherboard with Xeons, so I can utilize more RAM and have some room for upgrading.
Whereas if I go with the 5960X, this will be a single-CPU machine and the RAM capacity would be lower.
Thanks for the thoughts! I'll keep searching in the meantime. Cheers
Houdini Lounge » H14/15 new pc build
- itriix
- 152 posts
- Offline
Hey everyone,
All of the issues I am now having with OS X and Houdini have pretty much forced me to build a PC. To begin with, I will specifically be interested in doing large dynamics simulations (pyro, FLIP, grain, Bullet, RBD, cloth).
I've read everything I could find on the forums in terms of Houdini specific setups, and I have a few questions, if you have the time.
First off, I'm willing to spend around $3-5k, with lower being preferable but with wiggle room to make “worthwhile” parts choices if necessary.
Second, I'd like (if possible) to build something that is a bit “future-proof”, with the ability to upgrade down the road - especially if there are cost-reducing choices I could make right now and then upgrade later.
Just to get it out of the way, for the GPU I'm thinking: EVGA GeForce GTX Titan X (down the road I could get a second for SLI).
Other options are 2 x GTX 980s or 1 x GTX Titan Z.
The CPU is where I'm having the most questions. Here are my thoughts:
i7 6700K Skylake, or i7 5960X, or (here is where I'm not sure)… should I go for a dual-socket LGA2011-v3 mobo and get 2 x lower-end Xeon E5s, possibly the E5-2630 (2.4GHz, 8 cores, 20MB cache)? Another option is to get a higher-level Xeon E5 for a similar price to the two cheaper E5 processors, install it in one socket of a dual-socket board (I'll have to make sure I find a mobo that isn't affected by this), and then upgrade with a second CPU later on when I have more money.
This also opens up the question (more specific to Houdini) of its performance with Hyper-Threading, dual processors, clock speed, cores, and all that.
So does Houdini benefit more from a CPU with higher GHz and fewer cores, or lower GHz and more cores? In particular, related to the CPUs I'm debating above (or if there are better choices, please point me to them, thanks!).
Overall, my thinking is this: to satisfy the “longevity and upgrade” requirement I've imposed on myself, I feel like the dual Xeon route would also give me the ability to add MUCH more RAM over time. For the moment I could go with 64GB or similar, then later up to 128GB+. Also, if I went with the “cheaper” dual Xeons, I could upgrade to two better ones when prices drop.
On the other side, something like the new i7 6700K Skylake lets you upgrade to a max of 64GB of RAM (better than the previous 32GB limit). But at that point it won't really be “upgradeable” anymore, because I could just max it out right now. The only real upgrade path would then be the CPU, if they come out with a higher-core Skylake later on that still fits the same mobo. However, there are no details on that yet.
Well that's about it for now. I appreciate your help. Cheers everyone.
Technical Discussion » geometry selection error (Mac)
- itriix
- 152 posts
- Offline
There is a selection bug (among others) in H14 for Mac.
https://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&t=37497 [sidefx.com]
Technical Discussion » OSX 10.10.3 & H14
- itriix
- 152 posts
- Offline
Specs for my Macbook Pro ~
OSX 10.10.3. Late 2011 MacBook Pro, 2.5 GHz, Intel Core i7, 16 GB RAM, AMD Radeon HD 6770M 1024 MB.
(My GPU is supported)
I have a topic where I've been discussing how H14 is unusable for me as of the most recent daily build: https://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&t=37497 [sidefx.com]
Technical Discussion » OSX 10.10.3 & H14
- itriix
- 152 posts
- Offline
Houdini Lounge » H14 Mac GPU Requirements
- itriix
- 152 posts
- Offline
Just a quick check-in:
Most recent OS X update
Daily Build 14.0.331 ~ Selection bug still exists
Production Build 14.0.335 ~ Selection bug still exists
Daily Build 14.0.349 ~ Selection bug still exists
Edited by - June 1, 2015 12:34:54
Houdini Lounge » H14 Mac GPU Requirements
- itriix
- 152 posts
- Offline
goldleaf
Yes. They control OpenGL on OS X, and if they choose not to include fixes/workarounds for “older” GPUs, there isn't a whole lot we can do.
Good to know. Thanks goldleaf.
My laptop was upgraded specifically for Houdini use, so Houdini being unusable is quite unfortunate indeed.
Update:
New H14 14.0.291 Production Build - selection bug still exists.
Houdini Lounge » H14 Mac GPU Requirements
- itriix
- 152 posts
- Offline
Does it seem plausible that the Apple bugs would cause improper point, primitive and vertex selection while leaving edge selection fine?
I can properly box select edges. But if I try to box select points or prims, I get a random selection of points or prims instead of the ones that I selected.
Something else worth mentioning: I can manually select a single point by clicking on it, and I can add to that selection by selecting individual points one at a time. It's only when I try “drag-select” methods (which are more common) that the selection bug occurs, giving me a random selection of points.
Just grasping at straws here, but I'll also point out that I am on a 17-inch MacBook Pro. It's doubtful that the 17-inch screen could have anything to do with the issue, but just being thorough.
Houdini Lounge » H14 Mac GPU Requirements
- itriix
- 152 posts
- Offline
Are the “Apple bugs” showing up in both Mavericks and Yosemite? Just curious, because the selection bug that I have occurs in both OSes.
Houdini Lounge » H14 Mac GPU Requirements
- itriix
- 152 posts
- Offline
tinyparticle
Right now I am using 14.0.258 with Nvidia on MacOSX but still having crashes and issues. It's good to know that Apple has acknowledged the bugs and there will be a fix.
I'm curious what other issues you are having with it besides crashes. Are you experiencing the “selection” issues that I have?