As computer hardware continues to advance, with software to match, we have entered an age where creating amazing-looking digital media is easier than ever, even at an entry level. However, good design goes beyond looks. An asset also must function well for the user, as well as for those down the pipeline who create and set up its functionality. A solid creative pipeline for creating assets saves you time and frustration and helps you develop a more interactive and playable asset.
When creating 3D assets, I break the process into four main parts: preplanning and research, the first pass, implementing and testing, and the final pass. This upfront work might seem like a lot of extra effort, but it saves time in the long run by identifying and resolving problems earlier in the pipeline.
Let’s dive in and see how these four steps can enhance your creative pipeline.
Understand What You Need to Make Before You Start to Make It
Novice digital artists often make the mistake of not taking the time to fully understand what they are going to make before they hop into their program of choice and start creating. The problem is that if you’re creating an asset for a game or commercial purpose, most likely some level of functionality and interactivity will need to be accounted for. By taking the time at the start to understand how the asset at hand is expected to work, you can break it down into components and piece it together as a whole. Otherwise, you run the risk of having to break apart an already completed asset and patch up the seams or, worse, start over.
Here are some tips to help you better understand what you need to make before jumping in to make it:
- Find plenty of references and study them. As obvious as it sounds, many digital artists don’t spend enough time finding quality references to influence their design. I always ask the client to provide references, if possible, to get a better idea of what they want. I also allow myself a reasonable amount of time to find and study references to help me better understand the subject matter. Believe it or not, Pinterest* is a great tool for finding references and creating reference boards on a per-project basis.
- Concept: quantity over quality. Normally you would think the opposite is true, but during the concept phase having numerous ideas gives you more options for selecting the best one than settling on a single idea that is just okay. Make sure you have your references handy. A great practice is to take timed sprints of 25 minutes to an hour, spending at least 5 minutes per concept but no more than 10. During this process, zoom out and focus on the overall form rather than getting caught up in details, which you'll do after you review the first few rounds of concepts.
- A good model sheet removes a lot of the guesswork. A standard model sheet has a front, side, and top view of the asset you plan to make, acting as a great guide for the modeling process. To get the most from your model sheets, preplan any functioning parts to see whether they work mechanically and aesthetically in all views. This also lets you see the various functioning parts, so you have a better idea of how many pieces your asset may need. If something seems off, you can address it before the modeling process, saving yourself time and frustration.
Taking adequate time to preplan results in a more well-thought-out product, ultimately saving time for you and your teammates.
Now that we have a good understanding of what we want to make and a fancy model sheet as a guide, let's model it.
First Passes for First Impressions
The first pass is similar to the concept phase, in that we want to focus on the main forms and functionality. As mentioned before, the easy part of creating a 3D asset is making it look good, so at this stage we want to ensure our asset’s interactivity is center stage and developed properly. Once again, keeping it simple at this stage allows for more wiggle room if we need to address any issues after testing. Details can be added easily later, but having to work around them can become problematic and frustrating.
Here are some tips to speed up the modeling process and keep the mesh optimized:
- Set the scene scale before modeling. It's easy to want to start creating as soon as we open our program of choice, but we need to ensure that what we're making matches the specifications given. Although scaling up or down after the fact isn't that hard to do, not having to do it at all is much easier (a quick scripted check is sketched after this list).
- Not every asset needs to start as a single mesh. It's much easier to combine meshes together and clean up loose ends rather than rack your brain on how a single mesh can be built outward. This is especially important for functioning parts, because having a separate mesh that can be altered on its own without affecting other meshes is easier to deal with.
- Mind your edge flow and density. Having a smooth mesh is appealing to the eye, but at this phase having a higher poly density increases the amount of detail we have to maintain, sometimes for even the smallest changes. Keep it simple for now, because we can always add extra subdivisions once we're happy with the form as a whole.
- For symmetrical pieces, focus on one side and mirror across the appropriate axis. This approach guarantees that the mesh will be symmetrical, saving you a lot of time reviewing and cleaning up. Expect to do this multiple times as you develop the mesh to get a better sense of the object as a whole. If you end up with any gaps down the seam when mirroring, you can either sew the vertices together with Merge To set to Center, or select all the vertices on the seam, move the pivot point to the central origin, and scale them toward each other (the sketch after this list shows a scripted merge as well).
- If duplicating a mesh, UV it first. UV unwrapping is already time intensive, so why spend extra time when you can do it once and have it carry over to the duplicates? Although you can copy and paste UVs for duplicated meshes, sometimes you end up with undesirable results that require extra time to fix.
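For the scale and mirroring tips above, here is a minimal maya.cmds sketch. It assumes a meter-based working unit and hypothetical mesh names (body_geo_L for the modeled half, with its pivot sitting on the center seam); adjust the unit, names, and merge tolerance to your own project.

```python
import maya.cmds as cmds

# Confirm the scene's linear unit before modeling (the query returns 'cm', 'm', etc.).
if cmds.currentUnit(query=True, linear=True) != 'm':
    cmds.currentUnit(linear='m')

# Mirror a finished half across X by duplicating it with a negative X scale.
half = 'body_geo_L'  # hypothetical mesh name; pivot assumed to sit on the seam
mirror = cmds.duplicate(half, name='body_geo_R')[0]
cmds.setAttr(mirror + '.scaleX', -1)

# Baking the negative scale flips the face winding, so reverse the normals after freezing.
cmds.makeIdentity(mirror, apply=True, scale=True)
cmds.polyNormal(mirror, normalMode=0, constructionHistory=False)

# Combine the halves and merge the vertices along the center seam within a small tolerance.
body = cmds.polyUnite(half, mirror, name='body_geo')[0]
cmds.polyMergeVertex(body, distance=0.001)
cmds.delete(body, constructionHistory=True)
```

Merging by distance saves you from picking vertex pairs along the seam by hand; just keep the tolerance smaller than the tightest edge length in your mesh.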
After the mesh resembles a rough form of what we're aiming for, I recommend setting up some base materials with simple colors to mock up the details. If we've managed our edge flow well enough, we should have approximate areas for a base level of textures that we can apply on a per-face basis (a quick scripted example follows). Doing this saves a lot of time because we're not committing to the arduous task of perfecting our UV unwrapping, which comes during the polish phase. We'll also have more control updating colors on a per-material basis when we test in-scene.
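As a rough example of that mock-up step, the helper below is a small maya.cmds sketch (the function name and the color values are placeholders): it creates a flat-colored Lambert plus its shading group and assigns it to whatever faces are currently selected.

```python
import maya.cmds as cmds

def assign_base_color(name, rgb):
    """Create a flat-colored Lambert and assign it to the currently selected faces."""
    shader = cmds.shadingNode('lambert', asShader=True, name=name + '_MAT')
    cmds.setAttr(shader + '.color', rgb[0], rgb[1], rgb[2], type='double3')
    # A shading group is what actually gets assigned to geometry.
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name=shader + 'SG')
    cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader', force=True)
    cmds.sets(cmds.ls(selection=True), edit=True, forceElement=sg)
    return shader

# Select the faces for one surface in the viewport, then run, for example:
assign_base_color('metal_trim', (0.35, 0.35, 0.40))
```

Because each surface gets its own named material, updating a color later is just a matter of editing that one material.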
Now all we need to do is export our asset so we can test it in the engine and ensure it looks and functions as intended. However, first we need to do some important prep work to avoid having to re-export and import multiple times.
Make sure to address the following details before exporting:
- Is your hierarchy concise and are your meshes properly named? With any asset that's going to have various functioning parts, you will most likely be dealing with multiple meshes. Taking the time to build a concise hierarchy makes it clear which meshes interact together or independently, and properly naming them avoids confusion about what each part in the hierarchy is.
- Are the origins of your points of interaction set up properly and locked in? Not every mesh you create will need its origin point at 0,0,0. This is especially true when you're working with multiple meshes and moving them about based on the hierarchy. So we want to set the pivot points where they make sense and freeze the transformations once they are where we want them (scripted in the sketch after this list). This will make it easier to manipulate any aspect of the asset in a scene.
- Are your materials set up and managed well? Try to avoid using default materials and shaders, because doing so clutters the project with duplicate materials in various places, causing confusion when you need to assign them in the editor. To catch any faces that were missed and left on the default material, I recommend going to the Hypershade and then holding down right-click on the material to select all the faces using it. If nothing gets selected, we're good to go. If something does, it's now selected and we can assign it to the material we actually want.
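For the pivot and material checks in the last two items, here is a minimal maya.cmds sketch; prop_root is a hypothetical name for the asset's top-level group.

```python
import maya.cmds as cmds

# Freeze transforms on every transform under the asset's root group so the
# pivots we placed are locked in with zeroed values.
for node in cmds.listRelatives('prop_root', allDescendents=True, type='transform') or []:
    cmds.makeIdentity(node, apply=True, translate=True, rotate=True, scale=True)

# Select anything still using the default material; an empty result means we're clean.
cmds.hyperShade(objects='lambert1')
leftovers = cmds.ls(selection=True)
if leftovers:
    print('Still assigned to the default material:', leftovers)
```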
With our asset set up and prepped, we can export without having to make major changes and re-export later. When using Maya*, I recommend the Game Exporter because its export settings are easy to understand and adjust. It's also good practice to set the exporter to Export Selection for more control over what you're exporting, so you don't end up with stray meshes, materials, cameras, and so on. Once we export our asset, we can test it in scene and see how it works and feels.
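If you'd rather drive the export from a script than click through the exporter each time, a rough equivalent of Export Selection via maya.cmds might look like the sketch below. It assumes the FBX plug-in is available; the group name and file path are placeholders.

```python
import maya.cmds as cmds

# Make sure the FBX plug-in is loaded, then export only the asset's hierarchy.
cmds.loadPlugin('fbxmaya', quiet=True)
cmds.select('prop_root', replace=True)  # hypothetical asset root
cmds.file('C:/projects/demo/exports/prop_rough.fbx',  # placeholder path
          exportSelected=True, type='FBX export', force=True)
```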
Time to Test Your Mettle and See How It Came Out
With all the preparation we did to ensure a solid design before jumping into modeling, and a rough version of the model that focuses on functionality instead of minor details, it's time to see where we stand. We'll start by adding our exported model to the project in an appropriate folder. Metadata files are automatically created, as well as a folder with any materials we assigned. Because we chose to make multiple materials, rather than a single UV layout and material, to save time by not overcommitting to details, we will have to clean up this folder once we have the final version of our asset.
Now, let's drag our asset into the scene to review and test it to ensure it works and feels as intended.
Here's what we'll want to watch for:
- Is it in scale with the rest of the assets in the scene? This is the most obvious thing to check, and it will stand out the most if it's drastically off. If the scaling is off, it is important NOT to scale the in-scene asset. Doing so has no effect on the scale of the base asset, and it causes frustration down the road because we'd have to manually scale the asset each time it is brought into the scene. To adjust the scale in-engine, we want to do so via the asset's Import Settings under Scale Factor. However, I recommend noting the scaling difference and correcting it in the final pass on the asset before re-exporting, because the Scale Factor change lives in the metadata file, which may change when reimporting and not retain the scaling adjustment we made.
- Do any areas of the mesh not read as intended? Even though we made a stellar model sheet to work out any odd details before we began modeling, sometimes once the mesh is in-scene to scale among other assets, some areas and features may not read as well when in context. Rather than reviewing the asset in Scene View, we want to view it from Game View to get a better idea of how it will look to the user and avoid being overly critical of areas that may not even be noticeable when in use. Our goal is to achieve overall readability. If the asset doesn't quite seem to be what we intended, we need to note why and figure out ways to make corrections for our final pass.
- Are the pivot points where they need to be and zeroed out? We took the time before exporting to ensure our pivot points were where they needed to be and locked in, so now we want to ensure this carried over properly in-engine. Before pondering what might have gone wrong, check to make sure Tool Handle mode is set to Pivot and not Center. As we double-check each mesh within the asset's hierarchy, we also want to verify that the transforms are zeroed out as well. This way if any part accidentally gets moved out of place, anyone can zero it out and it'll be right where it needs to be.
Checking minor details like this will ensure things are done right the first time and let us progress more quickly to other tasks. That said, we can't always expect perfection on the first try, which is why it's important to keep things rough at first, and then refine them as we go along.
Ideally, with all the precautions we took to ensure high quality, few if any alterations will be necessary after reviewing our work, and we can get cracking on a final pass to make it nice and shiny.
Polish It up and Put a Bow on It
This is the moment we've been working toward. First, we need to address any shortcomings we noted when reviewing in-scene, keeping in mind the techniques we used during our first pass. After we have made any alterations to the base form, we can start cleaning up the mesh and adding those sweet deets we've wanted since the beginning.
Here are some pointers to keep in mind as we’re cleaning up and polishing:
- Be selective when adding subdivisions. When smoothing out a mesh, it's best to avoid simply running the Smooth tool on the entire mesh, because each division multiplies the polygon count across the whole model. Yes, it will look nice and smooth, but at the cost of performance in the long run. Instead, be more selective about the areas you add subdivisions to. Start with larger areas and soften the edges to see if that does the trick. If not, select only the faces of the area that needs smoothing and run the Smooth tool on those, because that gives you better control of the poly count (see the sketch after this list).
- Bevel hard edges to make them seem less sharp. This is similar to adding subdivisions to faces, but instead you’re adding to edges. This makes them appear softer where softening the edge doesn't quite do the trick (usually edges with faces at 135 degrees or less). It also has a nice effect when baking ambient occlusion. As with smoothing faces, we want to be selective when choosing which edges to bevel so as not to drastically increase poly count.
- Be mindful of edge flow and density. There's no need to have multiple edge loops on large flat surfaces. Minimize poly density when you’re able to as long as it does not disrupt edge flow. This will make UV unwrapping easier as well.
- Don’t be afraid of triangles. There’s a common misconception among 3D modelers of various skill levels that a mesh needs to consist primarily of quads. Although we want to avoid anything with more sides than a quad, there’s nothing wrong with tris. In fact, most game engines, such as Unity3D* and Unreal*, triangulate meshes on import anyway, because graphics hardware renders triangles.
- Delete deformer history from time to time. The more we work on our mesh, the more construction history gets saved, crowding the Attribute Editor and possibly affecting other deformer tools. When we're happy with the results after using deformers such as Bevel and Smooth, we can delete the history by selecting the mesh and going to Edit > Delete All by Type > History, or by pressing Alt+Shift+D. This keeps the Attribute Editor clean and prevents other deformer tools from misbehaving down the line.
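A maya.cmds sketch of the selective approach from this list might look like the following; the face and edge selections are made in the viewport first, and prop_root is a placeholder name.

```python
import maya.cmds as cmds

# With only the faces that need extra resolution selected, add a single division
# to just that region instead of smoothing the whole mesh.
cmds.polySmooth(cmds.ls(selection=True), divisions=1)

# With a handful of hard edges selected, bevel them so they read softer
# without ballooning the overall poly count.
cmds.polyBevel(cmds.ls(selection=True), offset=0.05, segments=1)

# When the result looks right, clear construction history (same as
# Edit > Delete All by Type > History); 'prop_root' is a placeholder name.
cmds.delete('prop_root', constructionHistory=True)
```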
Although we normally aim for a low poly count, when creating assets for virtual reality (VR) we don’t have that luxury. Keep in mind that because the user can get up close and personal with many of the assets in a VR environment, hard edges can look jarring, which calls for slightly higher poly counts.
Now that our mesh is cleaned up and polished, it's time to move on to one of the more tedious parts of 3D modeling: UV unwrapping. Here are a few pointers to make the most of your UVs:
- Start by planar mapping. Automatic UV unwrapping may seem like a good idea, and for simple meshes it can be, but for more complex meshes it ends up slicing the UVs into smaller pieces than you want, and you then have to spend time stitching them back together. Planar mapping, on the other hand, projects the UVs onto a plane silhouetting the mesh, similar to a top, side, or front view, and produces a single shell that you can break into components of your choosing (see the sketch after this list). I find it best to choose the plane with the largest surface area when planar mapping.
- Cut the shell into manageable sections. After planar mapping, you can create seams that will break the shell into smaller pieces by selecting edges and using the Cut UV tool. This makes it easier to manage sections as opposed to trying to unfold a larger shell and having to spend time making minor adjustments. You can always sew shells together for fewer seams after the fact, saving time and frustration.
- Utilize UV image and texture tools for less confusion. UVs can be confusing at times, because a shell may look the way you want but will be flipped, giving you undesired results. To ensure you know which way your UVs are facing, enable the Shade UVs tool (blue=normal, red=reversed). Another tool worth enabling is the Texture Borders toggle. This clearly defines the edges of your UV shells in the UV editor, as well as on the mesh in the main scene, making it easier to see where your UV seams are.
- Areas that will have more details should get more UV space. Although it's nice to have all the UVs to scale with each other, often there will be areas in which we want more detail than in others. By giving the areas that require more detail a larger share of the UV space, we can ensure those sections stay crisp.
- Thinking of UV unwrapping as putting together a puzzle can make the process seem more like a challenge and less like a dreaded chore.
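As a starting point for the planar-mapping and seam-cutting tips above, a minimal maya.cmds sketch might look like this; prop_root and the edge range are placeholders for your own mesh and chosen seams.

```python
import maya.cmds as cmds

# Project every face from a single plane (the Y axis gives a top-down silhouette),
# producing one large UV shell.
cmds.polyPlanarProjection('prop_root.f[*]', mapDirection='y')

# Cut that shell into manageable pieces along the chosen seam edges.
cmds.polyMapCut('prop_root.e[120:135]')
```

From there, the shells can be unfolded, straightened, and laid out in the UV editor as usual.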
Once we have our UVs laid out and reduced to as few materials as possible, we can export our mesh (using the same prep guidelines from the rough phase) and bring it into Substance Painter*.
Substance Painter is a great tool because it offers 3D navigation similar to that of many 3D modeling programs, the layer and brush customization of digital art programs, and the ability to paint on either the UV layout or the mesh itself. I recommend starting with a few fill layers of the material of your choice to recreate the base materials from the rough-out phase. By using layer masks, we can quickly add or remove our selected materials per UV shell. Custom brushes with height settings can add details such as mud, scratches, and fabric that can be baked into normal maps, adding a lot of life with a few simple strokes.
Before exporting our textures and materials, we need to do some prep work in order to get the most out of what Substance Painter has to offer:
- Increase or decrease resolution. One of the advantages of Substance Painter is how cleanly it can decrease resolution and go back up again without forfeiting detail. Not every asset needs to be at a high resolution. If your asset reads well with little to no noise, pixelation, or distortion at a lower resolution, lower it; with Substance Painter you can always go back up in resolution without losing the original amount of detail. If your asset is going to be used in VR, it’s best to increase resolution to ensure all details are as crisp as possible.
- Bake maps. This will make the most of any height details you create by baking them to a normal map, and ambient occlusion maps add subtle shadows that give assets that little extra pop to boost their readability. When baking maps, I usually set their resolution down a step from my material and texture maps as they tend to be subtler.
- Choose an export setting based on the designated engine to be used. Another great feature of Substance Painter is the export presets based on the various engines that can be used. This helps ensure you don’t get any strange effects when adding your maps to the asset in the engine.
We did it! We took the time to plan out our asset to its fullest, roughed out the major forms with functionality in mind, tested our asset in-engine, and detailed and cleaned up our asset in a final pass. Now we can hand off our hard work with the confidence that not only does our asset look great, but it’s also set up in a way that works efficiently and is easy to understand, so that someone down the pipeline can add interactivity and playability. And with all the time we saved and frustration we avoided, our level of creativity remains high for the next project.