Is photogrammetry really that important? Most of the things I want to show people don't exist, so I can't take photos of them. I can't take a photo of a spaceship, or of a new building I want to have constructed. Maybe it would be different if I could ask an AI model "build me an interactive set of a submarine" so that all the buttons, dials, switches, etc. are clickable and I can add scripts to them, because I won't get that from photogrammetry directly.
I do agree content creation needs to be more accessible. I'm just not sure whether something like Dreams is the solution, or whether the interesting things we see come from the 0.1% of its 9 million players who happen to have the talent and/or the patience.
There are a bunch of VR creation tools: Google Tilt Brush, Google Blocks, Oculus Medium, Oculus Quill, Gravity Sketch, and probably 10 others. They excel in that they are arguably far more approachable than traditional 3D software like Maya or 3ds Max, but they still fall short. Like Photoshop, Painter, or Illustrator, they are just tools: if you're not an artist willing to put in long hours of training, they won't help you make pretty things. That said, people have shipped VR titles using art made in some of them, and at least for simple prototypes I know a few people who have used Google Blocks to build a test level for a game.
I think one way forward is AI- or algorithmically-enhanced content, maybe combined with voice recognition. Say I want to build a living room set. I need probably a minimum of 50 models: a sofa, a table, stuff on the table, bookshelves, stuff on the bookshelves, an entertainment center, the stuff in the entertainment center, wall decorations, windows, doors, stuff outside the window (even if just a facade), etc. Just finding those assets takes hours, and that assumes I'm not picky. Whether the assets are online or in-app, it will still take hours to select them all, especially if I care at all about their style.
But if I could say, via voice control, "give me a sofa", "slightly larger", "more rounded edges", "make it green", "lighter green", etc., and the system responded both by making the change and perhaps by displaying a relevant set of sliders for that feature, with an AI generating the variations the same way today's face generators do, then I could build the entire living room in 10-20 minutes.
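To make the idea concrete, here's a minimal sketch of that command loop: each asset type is a parametric model, and each recognized phrase maps to a small parameter edit. Everything here is hypothetical (the Sofa fields, the phrase table); a real system would put speech recognition and a generative model on top of this rather than a hand-written table, but the interaction could look roughly like:

    # Hypothetical sketch: voice phrases mapped to parametric model edits.
    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Sofa:
        width_cm: float = 200.0
        edge_radius_cm: float = 2.0
        color: tuple = (0.5, 0.3, 0.2)  # RGB in [0, 1]

    # Each recognized phrase applies a small edit to the current model.
    COMMANDS = {
        "slightly larger": lambda s: replace(s, width_cm=s.width_cm * 1.1),
        "more rounded edges": lambda s: replace(s, edge_radius_cm=s.edge_radius_cm + 2.0),
        "make it green": lambda s: replace(s, color=(0.2, 0.6, 0.2)),
        "lighter green": lambda s: replace(
            s, color=tuple(min(1.0, c + 0.15) for c in s.color)),
    }

    def apply(utterance: str, sofa: Sofa) -> Sofa:
        edit = COMMANDS.get(utterance.lower().strip())
        return edit(sofa) if edit else sofa  # unknown phrases leave the model as-is

    sofa = Sofa()
    for phrase in ["slightly larger", "more rounded edges", "make it green", "lighter green"]:
        sofa = apply(phrase, sofa)
        print(phrase, "->", sofa)

The point isn't the lookup table, it's the shape of the loop: every utterance is a delta against a live parametric model, which is also exactly what you'd bind those on-screen sliders to.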
The tech seems close, in that the pieces seem to exist, but another problem, at least with current AI models, is that they need too many training sets. Even photo content recognition sucks: I can search for "car" or "airplane" on Google Photos, but I can't search for "gull-wing door Jaguar" or "P-38 Lightning".