UnrealEd

From cgwiki

A bunch of notes while I learn some unreal for a work project. Might be useful at some point...

Important hotkeys, navigation

While holding down any mouse button, WASD works. If you hold down the right mouse button you can then mouselook while moving, which is the most useful way to move around. While in this mode, the mousewheel will alter speed. Q and E move up and down.

G will toggle scene icons, handy when you have a million reflection capture/blueprint nodes in your scene.

F11 toggles fullscreen

alt-backtick will escape the vr editor mode pre-4.17; as of 4.17 it's now alt-v.

Alembic from houdini to maya drops uvs

Ok not unreal, but related to the unreal stuff I'm playing with at the moment...

This happened a few times; I could see in the maya script editor a complaint that the number of uvs didn't match the expected count. After some playing, it looks like a few poly faces that were passable in houdini were too broken for alembic. A clean sop set to allow manifold-only geo identified this. In my case it was an ngon with many sides, which I could identify and pre-triangulate with a divide sop. After that, uvs appeared in maya again.

Import fbx as full scene hierarchy

We had a scene that had been set up in maya; nicely laid out, correct pivots, names, groups etc. When imported via the content browser it'd lose all the hierarchy and bake the pivots back to the origin. If you dragged all the objects into your scene and set their translates to 0 0 0, the entire scene would be positioned correctly, but the broken pivots and missing groups made it all a little useless.

Instead, from the file menu choose 'file -> import to level'. This gives you some extra options; leave them at their defaults. UE4 will spin for a bit, but it will import all the objects from your fbx, and also pull them into the current map with the correct groups and pivots.

Displacement

Our generated disp maps are too small for unreal, so they need to be run through a multiply node to boost them. 10 seems a good starting point. This goes to the 'world displacement' slot on the main material.

The material itself needs to be told to do fancy realtime tessellation: under the tessellation properties set it to either flat or PN triangles, and enable adaptive. If you flip to wireframe (alt-2 in the material preview window), you should see the triangle count go up and down as you zoom in and out. The default tessellation is a little low for my tastes, so connect a constant to the tessellation multiplier slot of the material, and boost it to, say, 2 or 3. Gotta be careful with this, obviously!

Cos our maps are just height, they need to be explicitly multiplied against the worldspace normals, with a VertexNormalWS node. I got that tip from here:

http://www.tharlevfx.com/unreal-4-world-position-offset/
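To sanity check the idea, here's the displacement math as a quick python sketch (purely illustrative, the function name is made up; in the material it's just a Multiply node and VertexNormalWS):

```python
# Illustrative python sketch of the world displacement math:
# a scalar height sample has to become a worldspace vector, so multiply
# it against the surface normal (VertexNormalWS), and boost the
# too-small map with a multiplier (10 is the starting point above).

def world_displacement(normal_ws, height, multiplier=10.0):
    """Return the vector that'd go to the 'world displacement' slot."""
    return tuple(n * height * multiplier for n in normal_ws)

# a point whose normal faces up, with a height sample of 0.05,
# gets pushed along +Z:
offset = world_displacement((0.0, 0.0, 1.0), 0.05)
```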

Enabling Screen Space Reflections

Settings -> Engine Scalability Settings -> Effects -> Cinematic

Turn off static lighting

Edit -> Project Settings -> Rendering -> Uncheck "Allow Static Lighting" under Lighting

Then build lighting once to disable all the lightmap warnings

Forward rendering and MSAA

Also in project settings (using the parm filter at the top is the easiest way to find these; search for 'forward' and 'alias')

Editing materials in unreal involves recompiling/saving each time, but it's considerably longer and more irritating than a quick recompile of a vop network. There's something similar to promoted parameters to avoid recompilation (called 'parameters'), but to use them you need to create instances. The houdini analogy: promote parameters on a vop network, but then to use them you have to create an HDA out of your vopnet and make a new instance, and you can only use those promoted parms on the new instances (and only within the vop network editor, not 'on the front' of it).

Unreal materials vs vops cheat sheet

'Where's vector to float? Where's set component? Where's my ramp parameter?' Most are in there, but under different names, here's the ones I'm leaning on. A handy high level overview of the main shading nodes is here: https://docs.unrealengine.com/latest/INT/Engine/Rendering/Materials/ExpressionReference/index.html#expressiontypes

  • BreakOutFloat3Components - A vector-to-floats vop. Feed it a vector, it returns 3 floats. This also has a 2 float and 4 float version.
  • MakeFloat3 - A float-to-vector vop. Feed it 3 floats, it constructs a vector. This also comes in a 2 and 4 flavour version.
  • StaticSwitchParameter - A basic 2-way switch vop, where the true/false is exposed as a toggle when you create a material instance.
  • TextureCoordinate - A uv vop
  • SmoothCurve - Best equivalent to a chramp. Feed it a float, like the x coord of a texturecoordinate, and you have 2 values which control the tangents at the start and end of a 0-1 curve.


Plotcurve.gif

  • OneMinus - A complement node (took me ages to find this!)
  • ConstantBiasScale - A fit range.
  • Fresnel - Fresnel, obviously. I end up using this a lot in materials.


and some that don't have a Houdini equivalent, but sound handy:

  • QualitySwitch - Something to keep an eye on for later, a multi-switch that ties into the engine low/med/high settings for controlling performance.
  • Panner - Handy built in function to scroll texture coordinates with time, like doing a @uv.x+@Time in a wrangle.
  • PlotFunctionOnGraph - Super handy tip from Wyeth, can use this to visualise the result of the SmoothCurve (or other properties no doubt):
  • FlipBook - What you use to read from a texture atlas/mosaic/thumbnail sequence. Give it the number of sub images across/down, it will animate through them.
  • Sine_Remapped - A sine function pre-remapped to a 0-1 range, but has inputs exposed to map to whatever you want.
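To make sure I understood Panner and FlipBook, here's the equivalent math in python (a sketch of the idea, not anything Unreal runs; I'm assuming the atlas frames go row-major, left to right, top to bottom):

```python
def panner(u, v, time, speed_u=0.1, speed_v=0.0):
    """Scroll uvs over time, like doing @uv.x += @Time in a wrangle."""
    return ((u + time * speed_u) % 1.0, (v + time * speed_v) % 1.0)

def flipbook_uv(u, v, frame, cols, rows):
    """Remap base uvs into one sub-tile of a texture atlas."""
    frame = int(frame) % (cols * rows)   # loop the animation
    col = frame % cols                   # which tile across
    row = frame // cols                  # which tile down
    return ((u + col) / cols, (v + row) / rows)
```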


and some bonus entries from Mickaëlle Ruckert, cheers!

  • Append - curious one with no direct analogy in houdini; append 2 floats, get a vector2. Append a float to that, get a vector3. One more, vector4.
  • LinearInterpolate - a lerp
  • SphereMask - like doing a clamp(length(@P),0,ch('d')), but all in one handy single node.
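Most of these are one-liners, so to double check my translations, here they are as python (my own sketches; the sphere mask here ignores the hardness input the real node has):

```python
def one_minus(x):
    """OneMinus, ie a complement vop."""
    return 1.0 - x

def constant_bias_scale(x, bias=0.0, scale=1.0):
    """ConstantBiasScale, ie a fit range: add the bias, then scale."""
    return (x + bias) * scale

def lerp(a, b, alpha):
    """LinearInterpolate."""
    return a + (b - a) * alpha

def sphere_mask(dist, radius):
    """Simplified SphereMask: 1 at the centre, falling to 0 at the radius.
    (The real node also has a hardness input, skipped here.)"""
    return min(max(1.0 - dist / radius, 0.0), 1.0)
```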

Scripting via UnrealJS

Blueprint is like Vops, and native C++ is like the inlinecpp sop, but there's no vex equivalent in unreal (ie, an artist friendly scripting language, let alone an artist friendly compiled language). Epic added hooks to allow for scripting engines a while ago, and provided a Lua example which has since been abandoned. Looking for alternatives, SkookumScript looks pretty good, and python support in the editor is due soon (4.19 with any luck), but the one that caught my eye is UnrealJS.

Developed by a guy at a South Korean games company for their own projects, he's bolted Chrome's fast V8 javascript engine to UE4. For all the hate and heat javascript gets, it's close enough syntax-wise to vex to be non-threatening, and the browser wars mean that V8 is very fast. There's 2 youtube vids explaining how to get the basics going, and an interesting general one of how a talented javascript guy who's done a lot of the google chrome experiments has fallen into unrealJS and is doing cool things.

It's now available as a plugin directly in the unreal asset library. I've managed to make it say 'hello world' and create text that displays in the game engine, but nothing beyond that yet.

Houdini to prototype blueprint

On a fast gaming laptop blueprint is still a little too slow to use interactively. The basics of blueprint, especially in terms of texture operations, map loosely onto vops. I've been doing experiments in vops to work out uv operations, then using what I've learned to recreate the networks in blueprint. There's an irony here of using Houdini for realtime feedback on realtime shaders, because the shader editor for a realtime engine like Unreal isn't realtime enough. :)

Change material parameter with keypress at runtime


Summary

  • Materials can't be changed without being recompiled, like vops, but many times slower
  • Also like vops, you can promote parameters to avoid this recompilation, but you can't use materials directly this way
  • Making a material instance of the original lets you change those parameters, but only in the editor, not at runtime
  • To change material parameters at runtime, you need to create a dynamic material instance, which can only be created and assigned in code/blueprint.

Workflow

Define keypress event

  1. go to settings -> project settings, input section
  2. new axis mapping, name it
  3. define a key with the dropdown
  4. define a key to do the reverse action if needed, set its scale to -1


Make a dynamic instance material from your instanced material at runtime:

  1. Level blueprint, 'create dynamic material instance' function
  2. set dropdown to the material instance


Assign that material to your object at runtime

  1. Choose object, edit blueprint, construction script
  2. use that event to trigger a 'create dynamic material instance' function
  3. drag in a variable of the static mesh, use as target,
  4. drag out the return value to set a variable we can call from another blueprint soon


Link keypress event to parameter:

  1. open the event graph now for the same object
  2. drag in the variable you just made
  3. create a 'set scalar parameter value', link variable to target
  4. r.click, look for the name of the keypress axis event you defined earlier (should be in the menu inputs -> axis events )
  5. link its trigger event to the event input of the 'set scalar parameter value'
  6. manually type the parameter name into the purple field (there MUST be a way for this to introspect the names right?)
  7. set the value you want in the green field


Force this blueprint to be aware of player keyboard input

  1. in same graph, link an 'event begin play' to a 'enable input' function
  2. create a 'get player controller', feed that to the 'player controller' input


Incrementally add to the parameter when the key is pressed

  1. insert a 'get scalar parameter value' function in between the axis event and the 'set scalar parameter value' function, wire it up so it also reads the same parameter name and is linked to the same dynamic instance material
  2. create a 'float +' node to add the return value from the 'get scalar parameter value', and the axis value from the axis event
  3. send this value to the 'set scalar' function
  4. if the increments are too big, insert a 'float x' after the input axis value, and set the second term to, say, 0.001 to slow it down.
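In python-ish terms (names made up, a plain dict standing in for the dynamic material instance), each axis event does this:

```python
# Python sketch of the get/add/set loop above; 'material' is a dict
# standing in for the dynamic material instance, and the names are
# made up for illustration.

def on_axis_event(material, axis_value, param="disp_amount", step=0.001):
    current = material.get(param, 0.0)              # 'get scalar parameter value'
    material[param] = current + axis_value * step   # 'float x', 'float +', then set

material = {}
on_axis_event(material, 1.0)    # forward key event nudges the parameter up
on_axis_event(material, -1.0)   # the scale -1 mapping nudges it back down
```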


Unreal keypress.gif

Make widgets support clicks from gear vr

Summary

  • A widget by default watches for mouse click events
  • The playercontroller needs a widgetinteraction component to provide those clicks
  • The gearVR sends touch events, not clicks, so the playercontroller needs to listen for touch, and create press/release pointer key events to simulate clicks.

Workflow

Creating a widget is covered in this guide: https://docs.unrealengine.com/latest/INT/Engine/UMG/HowTo/InWorldWidgetInteraction/index.html

To make the playercontroller listen to input, look in world settings (window -> world settings), and find the playercontroller entry that's assigned. If you have a custom one already that can be edited, great, edit it, otherwise make a new playercontroller blueprint in the content browser, and assign to the world settings.

Edit the playercontroller blueprint, make sure the component tab is visible (window -> components), add a widgetinteraction component.

Edit the event graph for the playercontroller blueprint, add an 'Input Touch' event. Annoyingly this is hidden in the r.click menu, and also mislabelled. Turn off context sensitive, and search for 'touch'; it's the last entry in the 'input touch' subfolder.

Use its pressed and released events to drive a 'press pointer key' and a 'release pointer key' node respectively, with the key set to 'left mouse button'. Control-drag in the widget interaction variable, and wire that up as the target. To make it easier to test on the desktop, you can bind a regular keyboard key to also drive the press and release pointer key functions.

Player controller blueprint widget.jpg

Click the 'class defaults' button at the top, find the input section towards the bottom of the details view, and set "auto receive inputs" option to 'player 0', so it will listen to touch and keyboard events.

Playercontroller widget input.jpeg

Now select the widget interaction component in the top-right component view, and on its details panel set the interaction distance to what you need, and set the interaction source to 'center screen'.

Widgetinteraction settings.jpg

With all that done, you should be able to go back to the widget and its event graph blueprint, and add an 'on pressed (button 1)' event to drive behaviour, and it should all work.

Widget onpressed bp.jpg

Button 'calls', other things 'bind' to call via event dispatch

Eventdispatch widget.jpg

I have a widget button, I want something else to react when the button is pressed.

The base tech is event dispatch, covered very well here:

https://forums.unrealengine.com/showthread.php?100929-Event-Dispatchers-explained-Finally-!

The main problem is on the receiving end; you need to know the name of what you're listening to. In most examples this is the player itself, or the level, or a collision that gives you the name of the thing colliding, but in my case I couldn't find a clear way to get the name of the button (a widget blueprint).

This was complicated by the fact that with a widget, you have the 'actor', which is essentially the transform; it contains a reference to a widget, which is the button; and that in turn makes reference to the widget class, which is the blueprint code.

I naively thought 'aha, I'll just embed the name of the widget blueprint in the dispatch event, and the listener can extract it directly from there', but in hindsight that'll never work. The listener needs to bind itself to the event dispatch, and to do that it needs a target, ie the name of something that's calling. My logic means it listens to the dispatch to get the name, but without the name it can't listen to the dispatch. Catch-22!

Instead, I found I had to brute force it: on the beginplay event, find all actors that are widgets, loop through each one, get its widget component, get the explicit blueprint widget, and that's the name to use as the target. I'm sure this'll fail spectacularly later, but for now, it works.
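The pattern (and the Catch-22) is easier to see in code. Here's a minimal python sketch of an event dispatcher; note the listener needs a reference to the dispatcher before it can bind, which is exactly why the name can't travel inside the event itself:

```python
class Dispatcher:
    """Toy event dispatcher: the button 'calls', listeners 'bind'."""
    def __init__(self):
        self._bound = []

    def bind(self, callback):
        self._bound.append(callback)

    def call(self, *args):
        for cb in self._bound:
            cb(*args)

# The brute force fix from above: on begin play, walk everything in
# the 'level' to find the widgets, then bind to each one found.
widgets = [Dispatcher(), Dispatcher()]
pressed = []
for w in widgets:
    w.bind(lambda w=w: pressed.append(w))

widgets[0].call()   # 'button pressed'
```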

Making a 360 panorama stills viewer with hotspots

Overview

Blathery notes, refine later....

So the idea was similar to the old myst games; have a bunch of panos that are linked together, that you can click through with hotspot zones. I'd planned to generate these by loading up a big environment set, bringing in a panorama camera, and walking through the set, rendering an image wherever I felt it was interesting. How to load this in Unreal?

The end technique is super lazy. In Unreal I make a sphere, scale it up, and drag the first image texture onto it. If you snap the camera to the center of the sphere and rotate around that point, you're viewing the pano in all its pano glory. I then copy/paste the sphere, drag the second image onto it, and translate it to where it's meant to be. If you keep the camera at the first position and observe the outline of the second sphere as you move, you can see where the relative hotspot will be.

Do this for all the images, dragging on textures, placing spheres, eventually you have all your pano spheres laid out. Neato. Thinking out loud, it'd be wise to record the camera positions as nulls, export fbx, and bring them directly into Unreal, saving any eyeballing or boring drudge work. Hmm.

The game logic is pretty simple. Here's some magical pseudocode:

  • On game start, snap the camera to the first sphere
  • On every game tick:
    • Trace a ray from the camera, getting a list of the objects it intersects
    • If there's more than one object:
      • Display a hotspot to show we can click in this direction
      • If the user clicks:
        • Get the second object (the first is the sphere we're currently in, so ignore that)
        • If its a sphere:
          • Get its transform
          • Fade the screen to black
          • Teleport the player to that next transform
          • Fade up
    • If there's not more than one object:
      • Hide the hotspot to show there's nothing to click in this direction
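The click-handling branch of that pseudocode can be sketched in python like so (dicts standing in for the hit results, all names hypothetical):

```python
def pano_click_target(hits, clicked):
    """Given ray hits (nearest first) and whether the user clicked,
    return the sphere to teleport to, or None (hide/ignore the hotspot)."""
    if len(hits) <= 1:
        return None              # only the sphere we're inside: no hotspot
    if not clicked:
        return None              # hotspot visible, but no click yet
    target = hits[1]             # hits[0] is the sphere we're standing in
    if not target.get("is_sphere"):
        return None              # a blocker wall: stop the teleport
    return target
```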


That's the core idea. Means I don't have to track buttons per state, or drive it from a spreadsheet, or do any book-keeping; if there's a line of sight from one sphere to the next, you can click in that direction. The reason I test that the second object is a sphere is to allow me to put up blockers; I can create walls and such between the spheres, so if the ray test hits that, it stops the rest of the logic.

Blueprint code

Behold!

Pano blueprint all.jpg

White wires are events; they show how execution flows. A handy feature: you can hit play in the editor, click stuff, do things, and the wires will glow to show you how events are being triggered and control flow is being altered. Blue lines are object references, yellow are transforms, red are booleans, pink are strings, green are floats.

Breaking this down:

Begin play

Pano level start.jpg

Unreal uses events for most of its behaviour triggering. Pressing the 't' key is an event, clicking the mouse, tilting your gamepad, or in this case, the event is simply 'begin play'. The aim here is to snap the camera to the first sphere, so when play begins I get a list of all the 'actors' that are static meshes (ie, all the spheres), then do a for-each loop to iterate through them. In the loop I match their name to the one I'm after, in this case anything ending in "001". I then get the player camera manager and the transform of the first sphere, and snap the player to that location.

What was interesting the first time was that there are several things identified as 'the player'; it took a few goes to work out the right one. There's the player controller, the player pawn, and the camera. Still getting my head around it all, but skimming a few QnA's on stackoverflow implies that a player controller is the 'brain' of the player, where the logic goes, all that stuff. The 'pawn' is the physical entity in the level, which you can think of as being puppeteered by the player controller. Another way to think of the distinction: in a first person shooter, a player could die and restart many times over; each time the pawn is disengaged and a new one is made elsewhere, but the player controller stays persistent.

In this case I link to the player camera manager. Why? Trial and error. I found curious behaviour when using the others, which implies a failing of logic on my part more than Unreal's, I think; if I used the controller I'd inherit translation and not rotation. If I used the pawn, the reverse. Using the player camera worked as expected.

Pano attach blue hotspot.jpg

The extra bit here is to parent a sphere to the camera. This is an unlit, slightly transparent blue sphere, which in the default settings is parented with an offset so it's down the axis of the camera, about 500 units away. This is what I use for the hotspot indicator; I toggle its visibility later on.

Game tick

Pano trace start end.jpg

So this is what happens on every frame update of the game. There are probably more efficient ways of doing this, but for this simple setup it's not too much of a problem.

First I grab the camera, get its position, get its forward vector, and construct another position along this vector, many units away. It's this sort of stuff that makes me wish blueprint had the equivalent of a vex wrangle. Still, works.
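For reference, the wrangle version of that node cluster would be a one-liner; here it is as python (illustrative only):

```python
def trace_end(position, forward, distance=10000.0):
    """end point = camera position + camera forward vector * distance,
    which is the whole node cluster above in one line."""
    return tuple(p + f * distance for p, f in zip(position, forward))
```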

Pano debug trace values.jpg

This is a little example of printing stuff to the screen, and constructing strings. Again, the lack of a text field to just construct strings is irritating, but this works well enough. Note the unlabelled casting nodes; blueprint sets up a lot of that stuff for you when connecting almost but not quite similar things. Because blueprint networks are context free, there's an incredible amount of node types, too many to comfortably browse through by default. As such, the selection-aware preselect of the node menu is very handy. Drag from the output of a node, let go, and you get a reduced list that just pertains to that type of output. This also works for inputs, so you can drag out from an input, and get a similar reduced menu. And finally, you can connect an output of one node to an input of another, and if they can be connected via a type conversion, that'll be made for you too.

Project 360 latlong through a light

Latlong light sm.gif

Download plugin package: File:equirect_light.zip

A way to project a latlong/equirect image through a point light. Uses a material as a light function, and cheats all the theta/phi stuff with a 'light vector' node. Goes:

light_vector -> latlong_to_uv -> texture -> mask (as parameter) -> emissive_color

Using textures in lights assumes it's an IES profile, ie, it's black and white. As such you need to do the trick of splitting the texture into R, G, B, making 3 lights that are also R, G, B, and assigning the 3 material instances to each light.
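The latlong_to_uv part is just the usual equirect mapping; here's my understanding of it as python (the axis conventions here are a guess for illustration, the real material leans on the 'light vector' node instead):

```python
import math

def latlong_to_uv(direction):
    """Map a normalised direction vector to equirect/latlong uvs.
    Axis choice (z up) is an assumption for illustration."""
    x, y, z = direction
    u = 0.5 + math.atan2(y, x) / (2.0 * math.pi)   # longitude across u
    v = 0.5 - math.asin(z) / math.pi               # latitude down v
    return (u, v)
```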

Use the funky new(ish) video texture stuff, find yourself a 360 video, plug it into the light material, you have yourself a 360 video player you can project over your scene. Fun!

Create a content plugin

Handy tip from funk lord Wyeth Johnson.

Say you wanna share some stuff (like the 360 light above), and want to wrap it up in a self contained package. You can create a plugin, move or copy the stuff over to the new content folder that's made, package it, and send it off. The person receiving it puts the plugin in their project; on start Unreal will detect the plugin and make it available.

  1. Go to edit -> plugins
  2. Create new plugin in the lower right
  3. Name it. If the 'create' button is grayed out, click on the main center section of the UI (away from where you set the name), the button should activate, click it.
  4. In the plugin browser, enable the folder view on the left, and use the view options in the lower right to ensure 'show plugin content' is enabled. You should now see a folder for your plugin.
  5. Drag and drop what you need into that folder (you'll be asked if you want to move or copy, take your pick), or create new stuff in there, whatever you need.
  6. Save the project! Until you do this, none of the assets you moved/copied/created exist.
  7. Back to the edit -> plugins menu, find your plugin (right at the bottom of the list), click 'package' in the lower right. It will ask you for a parent folder location, and make a self contained folder of stuff.
  8. Send that folder to someone else. They create a 'Plugins' folder in their project (alongside the Content folder and .uproject file), put the folder in there, start Unreal. It will warn them a new plugin was detected; enable it, and it will be available in the content browser.

Look at the VR camera

Swish: https://answers.unrealengine.com/questions/490152/look-at-target-world-location-for-hmd-or-camera-po.html

'get player camera manager' will be hidden if you have the 'context only' option enabled, tricky that.

Parent things to the hand controllers

https://answers.unrealengine.com/questions/324995/get-hand-position-is-not-returning-correctly.html

use the cryptically named 'get hand controller and orientation', but also add the vrorigin from the vrpawn, or it'll be offset by your physical height off the ground, dur.

General blueprint musings

Blueprints tend to belong with the content browser version of a 'thing', rather than the in-the-level copy of a thing. Daring to reference OO terms from my failed 1st year uni computer classes, blueprints stick with the object definition rather than the instance.

This also means that cos you're working in the object definition, you can't just get references to things in the level, as the blueprint isn't aware of them (again, the dumb analogy is that the blueprint 'lives' in the content browser, while stuff you're interested in is over in the world outliner).

So, if blueprint can't immediately get to things in the level, how can you get to things in the level? 2 ways: Functions and Variables.

Functions: Blueprint has quite a few functions that list things in the level, one I've used a lot is 'get all actors of type..'. This creates an array, which you can run through a for each loop to filter down to the thing(s) you need. It's implied of course that you don't want to be doing this loads, so ideally limited to a constructor script or a Begin Play, not on every game tick (though I recall reading somewhere that there are some highly optimised get-all-actors-of-blah functions that run pretty quickly).

Variables: Like promoting parameters on an HDA, you can create a variable in your blueprint, make it public, and it gets exposed as an extra control on your object in the level editor. Also like HDAs, variables can be a simple float slider, or an object reference. So you create a variable of type actor (or get super specific with all the fancy actor types like pawns and foo instance blah blah), make it 'settable' by clicking the eye icon next to the name, and then drag a 'get' copy of that variable into your blueprint, treating it as if it's the thing you're interested in. When you have your blueprint-logic thing in your level and select it, you'll see the variable listed in its details panel. You can now point it to another object in your level, and the script should work as expected.

In this gif I set up exactly that; I have a cube and a light, and I want the cube to be able to find the light. I create a variable in the cube blueprint, make it public, set its type to a point light actor, 'get' it, and feed it to a print statement to report the name of the thing it found. After the script is compiled, the cube in the level editor has a new parameter which is an object slot. I point that at the light, play the level, and see that it finds the light as I expect.

Unreal blueprint variable get.gif