Unreal


A bunch of notes while I learn some unreal for a work project. Might be useful at some point...

Alembic from Houdini to Maya drops UVs

Ok, not Unreal, but related to the Unreal stuff I'm playing with at the moment...

This happened a few times; I could see in the Maya script editor a complaint that the number of UVs didn't match the expected count. After some playing, it looks like a few poly faces that were passable in Houdini were too broken for Alembic. A Clean SOP set to allow manifold-only geo identified this. In my case it was an ngon with many sides, which I could identify and pre-triangulate with a Divide SOP. After that, UVs appeared in Maya again.


Displacement

Our generated displacement maps are too weak for Unreal, so they need to be run through a multiply node to boost them; 10 seems a good starting point. The result goes to the 'world displacement' slot on the main material.

The material itself needs to be told to do fancy realtime tessellation: under the tessellation properties set it to either flat or PN triangles, and enable adaptive tessellation. If you flip to wireframe (alt-2 in the material preview window), you should see the triangle count go up and down as you zoom in and out. The default tessellation is a little low for my tastes, so connect a constant to the tessellation multiplier slot of the material, and boost it to say 2 or 3. Gotta be careful with this, obviously!

Cos our maps are just height, they need to be explicitly multiplied against the worldspace normals, with a VertexNormalWS node. I got that tip from here:

http://www.tharlevfx.com/unreal-4-world-position-offset/
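
To sanity check, the whole chain (assuming the disp map is a plain scalar height texture) works out to roughly

    displaced position = P + heightMap * multiplier * VertexNormalWS

with the multiplier around 10, and the separate constant on the tessellation multiplier controlling how finely the mesh gets diced rather than how far it moves.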

Enabling Screen Space Reflections

Settings -> Engine Scalability Settings -> Effects -> Cinematic

Editing materials in Unreal involves recompiling/saving each time, but it's considerably longer and more irritating than a quick recompile of a vop network. There's something similar to promoted parameters to avoid this recompilation (called 'parameters'), but to use them you need to create material instances. The Houdini analogy is if you were to promote parameters on a vop network, but then to use them you had to create an HDA out of your vopnet, create a new instance, and you could only use those promoted parms on the new instances (and only within the vop network editor, not 'on the front' of it).

Scripting via UnrealJS

There's no vop/vex analogy, so native Unreal either needs to use blueprint (vops) or C++; there's no middle ground built-in. Epic added hooks to allow for scripting engines a while ago and provided a Lua example which has since lapsed, SkookumScript looks pretty good, and I think I read somewhere that Epic plan to make their own scripting engine at some point. In the meantime a guy has bolted Chrome's fast V8 javascript engine in, and it's very promising. For all the hate and heat javascript gets, it's close enough syntax-wise to vex to be non-threatening, and the browser wars mean that V8 is very fast. There are two YouTube vids explaining how to get the basics going, and an interesting general one of how a talented javascript guy who's done a lot of the Google Chrome experiments has fallen into UnrealJS and is doing cool things.

It's now available as a plugin directly in the Unreal asset library. I've managed to make it say 'hello world' and create text that displays in the game engine, but nothing beyond that yet.

Houdini to prototype blueprint

On a fast gaming laptop blueprint is still a little too slow to use interactively. The basics of blueprint, especially in terms of texture operations, map loosely onto vops. I've been doing experiments in vops to work out uv operations, then using what I've learned there to recreate networks in blueprint. There's an irony here of using Houdini for realtime feedback on realtime shaders, because the shader editor for a realtime engine like Unreal isn't realtime enough. :)

Change material parameter with keypress at runtime


Summary

  • Materials can't be changed without being recompiled, like vops, but many times slower
  • Also like vops, you can promote parameters to avoid this recompilation, but you can't use the base material directly this way
  • Making a material instance of the original lets you change those parameters, but only in the editor, not at runtime
  • To change material parameters at runtime, you need to create a dynamic material instance, which can only be created and assigned in code/blueprint (a rough C++ equivalent is sketched just below).
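
As an aside, the same trick in C++ is pretty compact. This is only a sketch; the helper name and the 'MyParam' parameter are made up, and it assumes a static mesh component with a material (instance) already assigned in slot 0:

// Sketch: wrap whatever material is on a mesh in a dynamic material instance,
// so its parameters can be changed at runtime without a recompile.
// 'MakeTweakableMaterial' and 'MyParam' are made-up names for illustration.
#include "Components/StaticMeshComponent.h"
#include "Materials/MaterialInstanceDynamic.h"

UMaterialInstanceDynamic* MakeTweakableMaterial(UStaticMeshComponent* Mesh)
{
    UMaterialInterface* Base = Mesh->GetMaterial(0);              // the material instance assigned in the editor
    UMaterialInstanceDynamic* Dyn = UMaterialInstanceDynamic::Create(Base, Mesh);
    Mesh->SetMaterial(0, Dyn);                                    // swap the dynamic instance in
    Dyn->SetScalarParameterValue(TEXT("MyParam"), 0.5f);          // change parameters freely from now on
    return Dyn;
}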

Workflow

Define keypress event

  1. go to settings -> project settings, input section
  2. new axis mapping, name it
  3. define a key with the dropdown
  4. define a key to do the reverse action if needed, set its scale to -1


Make a dynamic material instance from your material instance at runtime:

  1. Level blueprint, 'create dynamic material instance' function
  2. set dropdown to the material instance


Assign that material to your object at runtime

  1. Choose object, edit blueprint, construction script
  2. use that event to trigger a 'create dynamic material instance' function
  3. drag in a variable of the static mesh, use as target,
  4. drag out the return value to set a variable we can call from another blueprint soon


Link keypress event to parameter:

  1. open the event graph now for the same object
  2. drag in the variable you just made
  3. create a 'set scalar parameter value', link variable to target
  4. r.click, look for the name of the keypress axis event you defined earlier (should be in the menu inputs -> axis events )
  5. link its trigger event to the event input of the 'set scalar parameter value'
  6. manually type the parameter name into the purple field (there MUST be a way for this to introspect the names right?)
  7. set the value you want in the green field


Force this blueprint to be aware of player keyboard input

  1. in the same graph, link an 'event begin play' to an 'enable input' function
  2. create a 'get player controller', feed that to the 'player controller' input


Incrementally add to the parameter when the key is pressed

  1. insert a 'get scalar parameter value' function in between the axis event and the 'set scalar parameter value' function, wire it up so it also reads the same parameter name, and is linked to the same dynamic material instance
  2. create a 'float +' node to add the return value from the 'get scalar parameter value' and the axis value from the axis event
  3. send this value to the 'set scalar' function
  4. if the increments are too big, insert a 'float x' after the input axis value, and set the second term to, say, 0.001 to slow it down.
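
For comparison, here's the whole chain as a rough C++ sketch rather than blueprint. Heavily hedged: the actor class, the 'TweakParam' axis mapping name and the 'MyParam' parameter name are all placeholders, and the axis mapping itself still needs to be defined in project settings as per the first list above.

// ParamTweakActor.h -- sketch of an actor that nudges a material parameter
// whenever the 'TweakParam' axis mapping (defined in project settings) fires.
#pragma once
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "ParamTweakActor.generated.h"

UCLASS()
class AParamTweakActor : public AActor
{
    GENERATED_BODY()
public:
    AParamTweakActor();

    UPROPERTY(VisibleAnywhere)
    class UStaticMeshComponent* Mesh;

    UPROPERTY()
    class UMaterialInstanceDynamic* DynMat;

    virtual void BeginPlay() override;
    void OnTweakParam(float AxisValue);
};

// ParamTweakActor.cpp
#include "ParamTweakActor.h"
#include "Components/StaticMeshComponent.h"
#include "Materials/MaterialInstanceDynamic.h"
#include "Kismet/GameplayStatics.h"

AParamTweakActor::AParamTweakActor()
{
    Mesh = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("Mesh"));
    RootComponent = Mesh;
}

void AParamTweakActor::BeginPlay()
{
    Super::BeginPlay();

    // 'enable input' + 'get player controller', so this actor hears keyboard input.
    APlayerController* PC = UGameplayStatics::GetPlayerController(this, 0);
    EnableInput(PC);

    // 'create dynamic material instance' from whatever material is in slot 0.
    DynMat = UMaterialInstanceDynamic::Create(Mesh->GetMaterial(0), this);
    Mesh->SetMaterial(0, DynMat);

    // Listen to the axis mapping defined in project settings.
    InputComponent->BindAxis(TEXT("TweakParam"), this, &AParamTweakActor::OnTweakParam);
}

void AParamTweakActor::OnTweakParam(float AxisValue)
{
    if (!DynMat || AxisValue == 0.f)
    {
        return;
    }

    // get scalar -> add a scaled axis value -> set scalar, same as the blueprint.
    float Current = DynMat->K2_GetScalarParameterValue(TEXT("MyParam"));
    DynMat->SetScalarParameterValue(TEXT("MyParam"), Current + AxisValue * 0.001f);
}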


Unreal keypress.gif

Make widgets support clicks from Gear VR


Summary

  • A widget by default watches for mouse click events
  • The playercontroller needs a widgetinteraction component to provide those clicks
  • The Gear VR sends touch events, not clicks, so the playercontroller needs to listen for touch, and create press/release pointer key events to simulate clicks.

Workflow

Creating a widget is covered in this guide: https://docs.unrealengine.com/latest/INT/Engine/UMG/HowTo/InWorldWidgetInteraction/index.html

To make the playercontroller listen to input, look in world settings (window -> world settings), and find the playercontroller entry that's assigned. If you have a custom one already that can be edited, great, edit it, otherwise make a new playercontroller blueprint in the content browser, and assign to the world settings.

Edit the playercontroller blueprint, make sure the component tab is visible (window -> components), add a widgetinteraction component.

Edit the event graph for the playercontroller blueprint, add an 'Input Touch' event. Annoyingly this is hidden in the r.click menu, and also mislabelled. Turn off context sensitive, and search for 'touch'; it's the last entry in the 'input touch' subfolder.

Use its pressed and released events to drive a 'press pointer key' and a 'release pointer key' node respectively, with the key set to 'left mouse button'. Control-drag in the widget interaction variable, and wire that up as the target. To make it easier to test on the desktop, you can bind a regular keyboard key to also drive the press and release pointer key functions.
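
If you'd rather do this in a C++ playercontroller than in blueprint, the same wiring looks roughly like the sketch below. The class and member names are made up; the widget interaction component and the press/release pointer key calls are the same ones the blueprint nodes wrap (UMG module needed in the build.cs).

// Sketch only: a player controller that turns touch events into the pointer
// clicks a widget interaction component expects. 'AMyVRPlayerController' and
// 'WidgetInteraction' are placeholder names.
#include "CoreMinimal.h"
#include "GameFramework/PlayerController.h"
#include "Components/WidgetInteractionComponent.h"
#include "MyVRPlayerController.generated.h"

UCLASS()
class AMyVRPlayerController : public APlayerController
{
    GENERATED_BODY()
public:
    AMyVRPlayerController()
    {
        // The component that provides clicks to in-world widgets.
        WidgetInteraction = CreateDefaultSubobject<UWidgetInteractionComponent>(TEXT("WidgetInteraction"));
    }

    virtual void SetupInputComponent() override
    {
        Super::SetupInputComponent();
        // The C++ side of the 'InputTouch' event from the blueprint graph.
        InputComponent->BindTouch(IE_Pressed, this, &AMyVRPlayerController::OnTouchPressed);
        InputComponent->BindTouch(IE_Released, this, &AMyVRPlayerController::OnTouchReleased);
    }

    void OnTouchPressed(ETouchIndex::Type FingerIndex, FVector Location)
    {
        // Same as the 'press pointer key' node: fake a left mouse click on the widget.
        WidgetInteraction->PressPointerKey(EKeys::LeftMouseButton);
    }

    void OnTouchReleased(ETouchIndex::Type FingerIndex, FVector Location)
    {
        WidgetInteraction->ReleasePointerKey(EKeys::LeftMouseButton);
    }

    UPROPERTY(VisibleAnywhere)
    UWidgetInteractionComponent* WidgetInteraction;
};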

Player controller blueprint widget.jpg

Click the 'class defaults' button at the top, find the input section towards the bottom of the details view, and set the "Auto Receive Input" option to 'Player 0', so it will listen to touch and keyboard events.

Playercontroller widget input.jpeg

Now select the widget interaction component in the top-right component view, and on its details panel set the interaction distance to what you need, and set the interaction source to 'center screen'.

Widgetinteraction settings.jpg

With all that done, you should be able to go back to the widget and its event graph blueprint, and add an 'on pressed (button 1)' event to drive behaviour, and it should all work.

Widget onpressed bp.jpg

Button 'calls', other things 'bind' to call via event dispatch

Eventdispatch widget.jpg

I have a widget button, I want something else to react when the button is pressed.

The base tech is event dispatch, covered very well here:

https://forums.unrealengine.com/showthread.php?100929-Event-Dispatchers-explained-Finally-!

The main problem is on the receiving end; you need to know the name of what you're listening to. In most examples this is the player itself, or the level, or a collision that gives you the name of the thing colliding, but in my case I couldn't find a clear way to get the name of the button (a widget blueprint).

This was complicated by the fact that with a widget, you have the 'actor', which is essentially the transform; it contains a reference to a widget component, which is the button, and that in turn makes reference to the widget class, which is the blueprint code.

I naively thought 'aha, I'll just embed the name of the widget blueprint in the dispatch event, and the listener can extract it directly from there', but in hindsight that'll never work. The listener needs to bind itself to the eventdispatch, to do that it needs a target, ie the name of something that's calling. My logic means that it listens to the dispatch to get the name, but without the name it can't listen to the dispatch. Catch-22!

Instead, I found I had to brute force it: on the beginplay event, find all actors that are widgets, loop through each one, get its widget component, get the explicit blueprint widget, and that's the name to use as the target. I'm sure this'll fail spectacularly later, but for now, it works.
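
In C++ terms an event dispatcher is a dynamic multicast delegate, so the brute-force bind would look something like the sketch below. The widget class, the dispatcher name and the listener are all made-up names for illustration:

// Sketch of the brute force bind: at begin play, find anything with a widget
// component, dig out the user widget, and bind to its dispatcher.
// 'UMyButtonWidget', 'OnMyButtonPressed' and 'AMyListener' are placeholders,
// and the widget class is assumed to declare the dispatcher roughly as:
//   DECLARE_DYNAMIC_MULTICAST_DELEGATE(FOnMyButtonPressed);
//   UPROPERTY(BlueprintAssignable) FOnMyButtonPressed OnMyButtonPressed;
#include "Kismet/GameplayStatics.h"
#include "Components/WidgetComponent.h"
// #include "MyButtonWidget.h"   // placeholder widget class header

void AMyListener::BeginPlay()
{
    Super::BeginPlay();

    // 'Get all actors of class' with the base actor class, ie everything in the level.
    TArray<AActor*> Actors;
    UGameplayStatics::GetAllActorsOfClass(GetWorld(), AActor::StaticClass(), Actors);

    for (AActor* Actor : Actors)
    {
        // actor -> widget component -> the actual user widget (the blueprint code).
        UWidgetComponent* Comp = Actor->FindComponentByClass<UWidgetComponent>();
        if (!Comp)
        {
            continue;
        }

        UMyButtonWidget* Button = Cast<UMyButtonWidget>(Comp->GetUserWidgetObject());
        if (!Button)
        {
            continue;
        }

        // Equivalent of 'bind event to dispatcher'. HandleButtonPressed must be a UFUNCTION.
        Button->OnMyButtonPressed.AddDynamic(this, &AMyListener::HandleButtonPressed);
    }
}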

Making a 360 panorama stills viewer with hotspots

Overview

Blathery notes, refine later....

So the idea was similar to old Myst games; have a bunch of panos that are linked together, that you can click through with hotspot zones. I'd planned to generate these by loading up a big environment set, bringing in a panorama camera, and walking through the set, rendering an image wherever I felt it was interesting. How to load this in Unreal?

The end technique is super lazy. In Unreal I make a sphere, scale it up, and drag the first image texture onto it. If you snap the camera to the center of the sphere, and rotate around that point, you're viewing the pano in all its pano glory. I then copy/paste the sphere, drag the second image onto it, and then translate it to where it's meant to be. If you keep the camera at the first position, and observe the outline of the second sphere as you move, you can see where the relative hotspot will be.

Do this for all the images, dragging on textures, placing spheres, eventually you have all your pano spheres laid out. Neato. Thinking out loud, it'd be wise to record the camera positions as nulls, export fbx, and bring them directly into Unreal, saving any eyeballing or boring drudge work. Hmm.

The game logic is pretty simple. Here's some magical pseudocode:

  • On game start, snap the camera to the first sphere
  • On every game tick:
    • Trace a ray from the camera, getting a list of the objects it intersects
    • If there's more than one object:
      • Display a hotspot to show we can click in this direction
      • If the user clicks:
        • Get the second object (the first is the sphere we're currently in, so ignore that)
        • If it's a sphere:
          • Get its transform
          • Fade the screen to black
          • Teleport the player to that next transform
          • Fade up
    • If there's not more than one object:
      • Hide the hotspot to show there's nothing to click in this direction


That's the core idea. Means I don't have to track buttons per state, or drive it from a spreadsheet, or do any book-keeping; if there's a line of sight from one sphere to the next, you can click in that direction. The reason I test that the second object is a sphere is to allow me to put up blockers; I can create walls and such between the spheres, so if the ray test hits that, it stops the rest of the logic.
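
The blueprint further down is what I actually built, but the pseudocode translates to roughly this kind of C++ (a heavily hedged sketch: the pawn class, the 'Hotspot' component and the click flag are placeholders, the fades are skipped, and it assumes the spheres respond to the visibility channel so the multi trace can collect them):

// Sketch of the per-tick pano logic: trace from the camera, and if the ray can
// see a second sphere, show the hotspot and (on click) teleport to that sphere.
#include "Kismet/GameplayStatics.h"
#include "Engine/World.h"

void AMyPanoPawn::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    // Camera position, and a point a long way along its view direction.
    APlayerCameraManager* Cam = UGameplayStatics::GetPlayerCameraManager(this, 0);
    FVector Start = Cam->GetCameraLocation();
    FVector End = Start + Cam->GetCameraRotation().Vector() * 100000.f;

    // Multi trace so we get the sphere we're inside of *and* whatever is beyond it.
    TArray<FHitResult> Hits;
    GetWorld()->LineTraceMultiByChannel(Hits, Start, End, ECC_Visibility);

    const bool bCanClick = Hits.Num() > 1;
    Hotspot->SetVisibility(bCanClick);            // Hotspot: the blue indicator sphere (placeholder member)

    if (bCanClick && bClickedThisFrame)           // bClickedThisFrame: set by a click input event elsewhere
    {
        AActor* Next = Hits[1].GetActor();        // [0] is the sphere we're currently standing in
        if (Next && Next->GetName().Contains(TEXT("Sphere")))   // ignore blockers/walls
        {
            // fade to black would go here, then:
            SetActorLocation(Next->GetActorLocation());
            // ...and fade back up
        }
    }
}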

Blueprint code

Behold!

Pano blueprint all.jpg

White wires are events; they show how execution flows. A handy feature is you can hit play in the editor, and click stuff, do things, and the wires will glow to show you how events are being triggered and control flow is being altered. Blue lines are object references, yellow are transforms, red are booleans, pink are strings, green are floats.

Breaking this down:

Begin play

Pano level start.jpg

Unreal uses events for most of its behaviour triggering. Pressing the 't' key is an event, clicking the mouse, tilting your gamepad, or in this case, the event is simply 'begin play'. The aim here is to snap the camera to the first sphere, so when play begins, I get a list of all the 'actors' that are static meshes (ie, all the spheres), then do a for-each loop to iterate through them. In the loop I match their name to the one I'm after, in this case anything ending in "001". I then get the player camera manager and the transform of the first sphere, and snap the player to that location.

What was interesting the first time was that there are several things that are identified as 'the player'; it took a few goes to work out the right one. There's the player controller, the player pawn, and the camera. Still getting my head around it all, but skimming a few Q&As on StackOverflow implies that a player controller is the 'brain' of the player, where the logic goes, all that stuff. The 'pawn' is the physical entity in the level, which you can think of as being puppeteered by the player controller. Another way to think of the distinction is if you were playing a first person shooter, a player could die and restart many times over; each time the pawn is disengaged and a new one is made elsewhere, but the player controller stays persistent.

In this case I link to the player camera manager. Why? Trial and error. I found curious behaviour when using the others, which I think says more about a failing of logic on my part than about Unreal; if I used the controller then I'd inherit translation and not rotation. If I used the pawn, the reverse. Using the player camera worked as expected.
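
In C++ that begin play step would be something like the sketch below (the pawn class is a placeholder, and for simplicity it moves the pawn itself rather than the player camera manager the blueprint targets):

// Sketch of the begin play step: find the sphere whose name ends in "001"
// and snap the player there.
#include "Kismet/GameplayStatics.h"
#include "Engine/StaticMeshActor.h"

void AMyPanoPawn::BeginPlay()
{
    Super::BeginPlay();

    // 'Get all actors of class', static meshes only (ie the pano spheres).
    TArray<AActor*> Spheres;
    UGameplayStatics::GetAllActorsOfClass(GetWorld(), AStaticMeshActor::StaticClass(), Spheres);

    for (AActor* Sphere : Spheres)
    {
        if (Sphere->GetName().EndsWith(TEXT("001")))   // match the first pano sphere by name
        {
            // The blueprint targets the player camera manager; moving the pawn is the simpler C++ route.
            SetActorLocation(Sphere->GetActorLocation());
            break;
        }
    }
}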

Pano attach blue hotspot.jpg

The extra bit here is to parent a sphere to the camera. This is an unlit, slightly transparent blue sphere, which in the default settings is parented with an offset so it's down the axis of the camera, about 500 units away. This is what I use for the hotspot indicator, and I toggle its visibility later on.

Game tick

Pano trace start end.jpg

So this is what happens on every frame update of the game. There are probably more efficient ways of doing this, but for this simple setup it's not too much of a problem.

First I grab the camera, get its position, get its forward vector, and construct another position along this vector, many units away. It's this sort of stuff that makes me wish blueprint had the equivalent of a vex wrangle. Still, works.

Pano debug trace values.jpg

This is a little example of printing stuff to the screen, and constructing strings. Again, the lack of a text field to just construct strings is irritating, but this works well enough. Note the unlabelled casting nodes; blueprint sets up a lot of that stuff for you when connecting almost-but-not-quite similar things. Because blueprint networks are context-free, there's an incredible number of node types, too many to comfortably browse through by default. As such, the selection-aware preselect of the node menu is very handy. Drag from the output of a node, let go, and you get a reduced list that just pertains to that type of output. This also works for inputs, so you can drag out from an input and get a similar reduced menu. And finally, you can connect an output of one node to an input of another, and if they can be connected via a type conversion, that'll be made for you too.