Hello Blender Wizards!

I am seeking your help in trying to solve a GIS problem using Blender. Any help, pointers or general discussion related to this will be highly appreciated.

I am a Blender n00b, but I am aware that Blender has a GIS plugin that helps in creating cityscapes by capturing terrain, buildings, etc. from GIS maps. Suppose a city with 3D buildings, parks and lakes has been created. Now I need to find all the dwelling units from which a particular park/lake is visible.

GIS has something called viewshed analysis, which can be used to find the area that is visible from any given point. But that is its limitation: it only gives the view from a single point, not from a whole area.

My idea is to create stacked dwelling units (apartments in high-rises) as white objects with unique Object IDs in Blender, and parks/lakes as colored light sources. Upon rendering, it is easy to see which dwelling units are lit up in which color. That is all good for visual analysis.

My question is: is there any way in Blender to get the Object IDs of objects that have non-white colors on their faces? Or do I have to turn to a game engine for this?

Looking forward to the responses. Cheers!

    • DontNoodlesOP

      Yes, I’m coming to Blender from the GIS end of things. I’m aware of most of the GIS tools and software and couldn’t find anything relevant that would help me solve this problem. Thanks for the suggestion nonetheless.

  • g6d3np81@kbin.social

    Geometry node with raycast
    EDIT: Removed rough description

    After some fiddling… I have run into the problem of how to raycast from many points to many points.
    Currently I’m stuck with a boolean visible/not-visible result. In real life you would want some measure of how much of it you can see (a float value).
    Will get there eventually, this is a nice exercise.

    EDIT again: Updated with a new top-level comment.

    • DontNoodlesOP

      Thank you for the reply. I look forward to your model, and in the meanwhile I’ll try to understand what you said with the help of the keywords you mentioned.

      • g6d3np81@kbin.social

        Yep… this is tough. After reading through another of your comments, I’m not sure if geometry nodes alone can handle this.

        The answer/solution will depend on how accurate or detailed you want the result to be.
        If you count any single point on a building that can see any amount of a park as a ‘pass’, that will be easy. But when you start to dice it down to each floor or room unit, it gets a bit harder. If you also want to check how much of the park you can see from a certain room, it gets even harder. If you want to list all of that, I think you will have to write Python code. The complexity will scale with the number of rooms, buildings and parks.

        If the number of objects you consider ‘lit up’ is small enough to select and move into another collection by hand in reasonable time, then maybe geometry nodes are enough. From your result, how many are there?

        The spreadsheet window (in the geometry nodes workspace) can list vertices/edges/faces and other elements along with any custom attributes they have (in this case, whether they see a park or not). It can also list the names of objects in a collection, but I’m not sure it can display extra data alongside them; it’s not the right tool for the job.

        • DontNoodlesOP

          Thank you for taking the time to try it out. For now, just finding whether any part of the building can see any part of the park should suffice.

          As I mentioned in one of my other replies, I’m mainly a GIS person and a Blender n00b, even though I’ve been calling myself that for a couple of years now.

          I saw some videos on geometry nodes and the raycast node, and my faith in Blender as a solution to the problem has become stronger. I’d never really seen the spreadsheet in Blender, and the mention of attributes makes me wonder if it is the same concept as in GIS, where each feature (like an object in Blender) has some properties associated with it. I have some exposure to Python, though not as part of Blender.

          I think a good academic paper and a tool could be developed out of this, and it has strengthened my conviction that game-related tools and GIS are on the cusp of merging.

          Maybe this will serve as my motivation to learn Blender, though I want to skip the unnecessary parts for now and learn just the things that will help me tackle this problem. Maybe you could give me a broad roadmap of what to learn.

          Also, please share a minimal working example of what you tried, if possible. If you are able to experiment further, I’ll be delighted to learn.

          Thanks again.

  • Oscar Baechler@mastodon.social

    @DontNoodles As I understand it, you want to:

    -use OSM
    -to import GIS into Blender
    -with the BlenderGIS add-on, I take it?
    -then use color info from that
    -to drive selections
    -for making buildings in white areas, populating green areas with trees, populating blue areas with water, etc?

    • DontNoodlesOP

      I’m with you up to the third bullet point. After that, I’m not planning to use the color info from the OSM/GIS tool. Instead, what I’m suggesting is:

      • Create buildings with each floor as a separate Blender object, white in color. Even a simple cuboid will do.
      • Create the park/lake as another object. This whole object alone will serve as a colored light source in an otherwise totally dark scene and illuminate the parts of buildings that its light can reach without being occluded (rough sketch at the end of this comment). My assumption is that the portions of the buildings that are now lit up are the places from which you can see the light source (the park/lake for us).
      • The challenge is to identify and list the IDs of the building objects that are now lit up.

      I hope this rewording makes my query more understandable. English is not my first language.
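
      Something like this very rough, untested sketch is what I have in mind for the park-as-light-source part (I’m new to the Python API, and the object name ‘Park’ plus the color/strength values are just placeholders):

      import bpy

      # Give the park object (placeholder name) a bright green emission material,
      # so that it acts as the only light source in an otherwise dark scene.
      park = bpy.data.objects["Park"]
      mat = bpy.data.materials.new("ParkEmission")
      mat.use_nodes = True
      nodes = mat.node_tree.nodes
      nodes.clear()
      emit = nodes.new("ShaderNodeEmission")
      emit.inputs["Color"].default_value = (0.0, 1.0, 0.0, 1.0)
      emit.inputs["Strength"].default_value = 50.0
      out = nodes.new("ShaderNodeOutputMaterial")
      mat.node_tree.links.new(emit.outputs["Emission"], out.inputs["Surface"])
      park.data.materials.append(mat)

      From what I’ve read, such an emissive surface only reliably lights other objects when rendering (or baking) with Cycles, so that is what I plan to use.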

      • Adalast@lemmy.world

        Not a Blender user myself, but these answers are general CG concepts. What you want is light baking. You can cook the lighting down to a UV-mapped texture, then bake out the Object IDs on the same UV coordinates, and you only have to compare the two.
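
        For the comparison step, something roughly like this (outside Blender, with numpy/PIL; it assumes both bakes share the same UV layout and resolution, and the file names are placeholders) would tell you which ID colors received any light:

        import numpy as np
        from PIL import Image

        # Lighting bake and Object-ID bake, rendered on the same UV layout/resolution.
        light = np.asarray(Image.open("light_bake.png").convert("RGB"), dtype=np.float32)
        ids = np.asarray(Image.open("id_bake.png").convert("RGB"))

        lit_mask = light.sum(axis=-1) > 10.0   # texels that received meaningful light
        lit_id_colors = {tuple(int(c) for c in px) for px in ids[lit_mask]}
        print("ID colors that received light:", lit_id_colors)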

        The other method that I have employed for a similar question using Houdini is a directional variant of Ambient Occlusion.

        • Scatter points on your buildings
        • Scatter points on your lakes/parks
        • Loop over the points on your building and from each attempt an intersection test with a ray pointing to each point on a park/lake
        • If it works, increment a counter, if not, don’t.
        • Find the fraction of rays that are able to see the feature.
        • Store that fraction on the point.

        You can do this for each feature to see how much of it is visible from each part of the buildings, then just store the floor number on each point as well, and bada boom, you have your mapping. Just have a shader sample the point values and bake them out to a texture.

        There are many advantages of an AO model over trying to use light baking to get the same info. Primarily, speed: you don’t need nearly as many sample points on either end to calculate it. Secondly, you get much more control over the details and can extract statistical information about the visibility. You can sample values on each sample point that can be aggregated and interrogated while it is doing all of the AO calculations. I can go on about this, but it would likely only become relevant once you saw how it worked.

        For instance, you can place indexes on points in a park to represent points of interest, like a fountain or gazebo, and then, as the visibility is being sampled, add the index to a set if the ray is successful. Boom, now you have a mapping of not only how much of the park that spot can see, but also which points of interest it can see, with essentially zero increase in calculation time. To do the same with light baking you would need a separate render. Also, for lighting you have to worry about falloff on the light, so it becomes difficult to use over a certain distance.

        • DontNoodlesOP

          I totally get your point about how it may be faster. Let me read up on ambient occlusion and, since I’ve never worked with Houdini, see whether I can implement the same thing using any of the tools I’m conversant with. Thank you for making me aware of this.

          • Adalast@lemmy.world

            Pretty sure Blender has a Python API. And AO research on its own probably won’t yield much juice for what I described; limiting the scope of AO calculations to calculate other things is kinda not a thing. I actually used it as the subject of my Master’s thesis. I was using it to calculate the exposure of scenes to the solar ecliptic throughout the year so I could calculate fading from direct sun exposure on textures. That is why I shared a step-by-step instead of a link. The principle for what you want to do is the same, though: measuring the exposure of geometry against some other geometry.

            From what I just read, you want to use the Scene.ray_cast() function. Usage should be straightforward.

            import bpy

            depsgraph = bpy.context.evaluated_depsgraph_get()

            for point in building_point_cloud:        # sample points (mathutils.Vector) on a building
                total_hit = 0
                for destination in park_point_cloud:  # sample points on the park/lake
                    direction = destination - point
                    hit, _, _, _, hit_obj, _ = bpy.context.scene.ray_cast(
                        depsgraph, point, direction.normalized(), distance=direction.length)
                    if hit and hit_obj.name == park_object.name:  # count only rays whose first hit is the park itself (unobstructed)
                        total_hit += 1
                visibility = total_hit / len(park_point_cloud)
                store_visibility_on_point_or_in_list(point, visibility)
            

            That should be enough to get you started. I am 99% unfamiliar with the Blender Python API, but this should get you there if you are even remotely experienced with it. There are obviously optimizations that can be done, as this is very brute force; I was just trying to illustrate the basic loop.

                • DontNoodlesOP

                  Currently, I’m trying to find a lazy (for me) way out. I’m learning to bake the lighting on objects and figuring out how to do it iteratively for all objects of interest (buildings) in a scene automatically. After that, I hope to do image processing on the unwrapped light-bake maps to detect the desired colors. It should be possible to crop these images to detect light on individual faces and/or find the percentage of area exposed, too.
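
                  Something along these lines is what I’m hoping will do the baking loop (very rough and untested; ‘Buildings’ is just a placeholder collection name, and each object still needs to be UV-unwrapped with an image texture node selected in its material as the bake target):

                  import bpy

                  bpy.context.scene.render.engine = 'CYCLES'
                  for obj in bpy.data.collections["Buildings"].objects:  # placeholder collection name
                      bpy.ops.object.select_all(action='DESELECT')
                      obj.select_set(True)
                      bpy.context.view_layer.objects.active = obj
                      # Bake the received lighting (without base color) into the active image
                      # texture node of the object's material, then save it for image processing.
                      bpy.ops.object.bake(type='DIFFUSE', pass_filter={'DIRECT', 'INDIRECT'})
                      img = bpy.data.images[f"bake_{obj.name}"]  # assumed naming of the bake images
                      img.filepath_raw = f"//bake_{obj.name}.png"
                      img.file_format = 'PNG'
                      img.save()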

                  If this does not work out, I’ll take up the progressively more involved approaches suggested in this thread as I learn and become comfortable with what you guys have kindly given me pointers for.

  • g6d3np81@kbin.social

    If you’ve just opened Blender for the first time and then you’ve tried to… do things.
    Maybe trying to see how far you can go… without getting help. [laugh]
    You might have discovered… you need help immediately.

    -- some donut guy


    This is janky, but it’s the best I can do. Maybe someone way smarter than me can find a more efficient way to do it.

    I’m doing this demo on some park with Butt in Germany (OSM)
    Blend file
    Layout window
    Node setup

    Limitation
    Lines of sight are cast from the ‘origin’ of the (building) geometry (one-to-many).

    I have tried the same method in reverse, combining lines of sight (many-to-many) and iterating through them, but I cannot get any result; the ‘Position’ node’s output changes based on whatever geometry you are using it with, and two sets of point clouds hurt my head.

    However, with this node setup, if you model each room as a separate object, it will still work, just at room level instead of building level. Wowowee itsa very nice. Just no way that I know of to quickly get an ID list.

    Process

    1. Import GIS stuff as usual
    2. You MUST set the origin of each object to its own geometry (the orange dot must stay inside the geometry), as the GIS plugin sets the origin to the world center (0, 0, 0), and that will break stuff (a small script can do this in bulk; see the sketch after this list).
    3. Any object that will potentially obstruct the view must be put inside a flat one-level collection.
      3.1 Buildings to analyze can be put in any collection, or in none at all.
      3.2 If a potentially obstructing object is also an analysis target from (3.1), make a linked duplicate (Alt+D, then Esc) and put the duplicate in the obstruction collection.
    4. The view target, e.g. a park or lake, must have faces; it cannot be just curves, edges or vertices.
    5. Add a geometry node modifier to any building you want analyzed.
      5.1 Browse for the node group named ‘visibility’ that I already set up.
      5.2 Select the target.
      5.3 Set the ray length (in meters) long enough to reach your target.
      5.4 Select any other buildings you want analyzed, with the copy source highlighted in yellow and the paste targets in orange, then press Ctrl+L to copy the geo-node modifier to all of them.
    6. Open the spreadsheet window and select a unit to see its visibility value.
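
    For step 2, something like this should be able to fix the origins in bulk (untested; ‘osm_buildings’ is just a placeholder collection name):

    import bpy

    # Set each imported object's origin to its own geometry instead of the world center.
    for obj in bpy.data.collections["osm_buildings"].objects:  # placeholder collection name
        bpy.ops.object.select_all(action='DESELECT')
        obj.select_set(True)
        bpy.context.view_layer.objects.active = obj
        bpy.ops.object.origin_set(type='ORIGIN_GEOMETRY', center='MEDIAN')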

    EDIT: I forgot to expose the obstruction selector; you can select it inside the geometry node window. I also got an idea for a V2 which may solve the issue of getting an ID list. Will try again tomorrow.

    HAPPY ACCIDENT: you can also stack the modifier and select another target; the spreadsheet will show another visibility attribute for each additional target in the stack.

    Be careful though; too many and Blender might crash.
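
    If you would rather have a plain list of which buildings can see the target than click through the spreadsheet, a small script along these lines should work (untested; it assumes the modifier writes its result to a named attribute called ‘visibility’ on the evaluated mesh):

    import bpy

    depsgraph = bpy.context.evaluated_depsgraph_get()
    for obj in bpy.context.selected_objects:           # the buildings that have the modifier
        mesh = obj.evaluated_get(depsgraph).to_mesh()  # mesh with the geometry-node result applied
        attr = mesh.attributes.get("visibility")       # assumed output attribute name
        if attr and any(item.value > 0 for item in attr.data):
            print(obj.name, "can see the target")
        obj.evaluated_get(depsgraph).to_mesh_clear()

    You could also print the attribute values themselves if you want the per-element visibility numbers instead of just the names.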


    I don’t recommend watching/doing the whole donut series, but you still need some familiarity to get around and understand the more complex parts of Blender that other tutorials may teach you :|

    Maybe you can cobble together something even better from these tutorial channels: Erindale, Chris P.

    • DontNoodlesOP

      Thank you very much. This gives me ample pointers to go out and learn.