Interview Questions
How to use this doc

This doc is a reference point for folks interviewing potential Tech Artists. It contains a collection of interview questions along with notes to help interviewers learn more during their limited time with a candidate. The goal is not to create some kind of perfect uber-interview script or an "SAT for TAs." The aim rather is to help provide some routinized way of getting value from our limited time with candidates.

Philosophy

A good question is not about “passing” or “failing.” "Gotcha" questions intended to catch a candidate out in a mistake are a waste of everybody's time; this is not a game where we force candidates to beat levels, it's an effort to size up a candidate's potential contributions in a very small amount of time. So, the questions in the doc skew heavily towards open-ended story questions. These give the candidate an opportunity to illustrate their core values and technical competencies in action. A story which really illustrates, say, good collaboration skills is better than asking a question like "Do you have good collaboration skills?" and being told "yes!" Thus, questions are intended to elicit answers which give the interviewer room to explore.

Please remember this is not a one-size-fits-all script that must be followed to the letter! There are far more questions here than any candidate ought to have to face, and every role comes with a unique balance of technical and collaborative needs. The goal here is to help a hiring team to provide some structure and regularity around their interviews: a standardized set of questions is a good tool for combating unconscious biases and ensuring that each candidate gets an equitable opportunity to show us what they are capable of. But "structure" and "regularity" should not make it impossible to have a real conversation with the candidates: we are hiring people, not quizlets.

Typically, the hiring manager should select a set of questions based on the needs of a given position and assign them to different interviewers for consistency's sake. Interviewers should not obsess over following the precise wording of the questions; being present in the room with the candidate is infinitely more important than robotically sticking to the wording here. This doc does not include purely informational questions like “How big was the team on that project?” or “Did that ship on both Xbox and PS?” You’ll inevitably ask many of these questions during a session, both for clarification and to move the conversation along. That's great, it's all part of the meta-goal of having a good conversation rather than running an interrogation. Just remember to keep an eye on the time and make an effort to stick with the general interview agenda so that all candidates have a chance to speak to all the most important concerns for the role.

Most of the questions are actually written as strings of questions – an initial setup followed by follow-ups. You won’t want to throw the whole string at the candidate in one go; the follow-ups might or might not be needed depending on the nature of the initial answer. Many lines of questioning will hit dead ends because the candidate has no relevant experiences or insights to share – it’s a good idea to have a few spare questions on deck so that you can quickly move on when the candidate just doesn’t have much to say on a particular topic.

Question formats

There are four basic question formats:

  • Story-formatted questions (“tell me about a time when…”) are better than simple yes-no questions (“have you used …?”). Stories give the candidate a lot of room to show their values and mindsets.
  • Questions reflecting best or worst experiences (“What was the worst production problem you ever had to solve?”) can give a good window into the candidate’s approach to work in general.
  • Self-description questions can help you explore how well the candidate understands their own place in the field. Many people will under-sell their own skills to avoid appearing too vain, while others will try very hard to overplay their real experience. Both of those types of answers can be instructive.
  • Scenario-based questions in which the candidate will interactively work to sketch out a design or solve a problem. These questions are a great way to see how well a candidate deals with open-ended situations and imperfect information. However it’s important that these questions are exploratory and interactive – not the equivalent of whiteboard coding. We want to see how candidates prepare themselves to tackle hard problems, not to “flunk” them for not having the exact mix of experience and opinions we’d like.

Currently the doc does not have a lot of scenario questions in the domain knowledge sections – if you have a scenario you’ve used successfully in some of the technical areas, please consider sharing it!

Different interviewers will probably gravitate towards different question styles, which is fine. However, it’s a good idea for interview teams to make sure a broad range of question types is included since different candidates will respond to different formats.

Interviewer hints

All of the questions in the doc will include an associated set of hints to help interviewers parse the answers and follow up. These hints are there to highlight the kind of information that we really care about. The hints can also be useful for steering the conversation in a useful direction: the kinds of open-ended questions we prefer sometimes let a candidate get lost in irrelevant details that are not very informative for us. The hints associated with the questions here can be helpful to interviewers who need to nudge a discussion back in a more productive direction. It’s very useful to note the pattern in the way a candidate answers open-ended questions: are they good at retaining the thread of a conversation, or do they wander? Are they very vague, or do they load their answers with irrelevancies? Are they good at making sure you follow the gist of their answers?

The hints for each question may also indicate:

  • Is this question better for junior or senior candidates?
  • What values or leadership skills does this question likely relate to?
  • For scenario-based questions, the hints usually include extra information that the candidate will need in order to give a good answer. The scenario format helps highlight the way candidates deal with uncertainty and how they find the information they need.

Question Areas

The questions are divided thematically. You'll note that the traditional knowledge-based questions are lumped together into [[Domain Knowledge]], which is subdivided more finely.

  • [[Collaboration and communication]]: how a candidate works with immediate teammates, with "customers", and with other disciplines
  • [[Problem Solving]]: how to approach messy, open-ended techart problems
  • [[Production Wisdom]]: dealing with production realities effectively
  • [[Industry]]: breadth of experience in the field, but also important for understanding the candidate's approach to learning and growth
  • [[UI/UX and Design]]: candidate's instincts for tool and workflow design
  • [[Leadership]]: leadership style and approach to management
  • [[Domain knowledge]]: traditional "how-to" questions.

If you’re interested in contributing questions to the bank, be sure to place your questions in relevant categories. More contributions are welcome! Contact Steve Theodore if you’re interested in helping maintain this document over the long run.


Questions

  1. Collaboration and Communication
    These questions concern the way the candidate fits into a team environment – how well they work with different disciplines, their approach to communication, and their ability to solve problems collaboratively.
    Many of these questions relate very directly to the different Unity values and leadership skills, so they offer good starting points for different values-based parts of the interview loop.
  2. How do you go about fitting in when you join a new company or take on a new role? How do you figure out how the team and the project really work?
    • This is a good one to customize based on your own experience and their resume – most of us have had that feeling of being stranded or overwhelmed by a new environment. It can be a good way to connect with the candidate, so it’s often a good early question.
    • Does the candidate’s response illustrate self-teaching and adaptability? (#InItTogether)
    • Do they start with technical problems or with team relationships? (#BestIdeasWin, #BuildingRelationships)
  3. What is your favorite example of a work environment where the collaboration was really effective? What made that experience so special?
    • This question is better for senior candidates, since beginners may not have much to go on besides a student project. “How did your student project team work together? What worked best about it?” is a similar question for interns.
    • Does the candidate focus on solid relationships, or on technical factors?
    • Does their answer include examples of #BuildingRelationships (their own or on the part of people they respected)?
    • Does it illustrate good alignment skills?
    • This can also lead to follow-up questions relating to inclusivity and diverse perspectives.
  4. Tell me about a time when you had a conflict with one or several team members about the way to proceed on a project. How did you go about resolving the issue? What did you learn?
    • Look for evidence of #DrivingForAlignment, #FierceFeedback and #InItTogether.
    • When asking conflict-related questions, it’s important to see if the candidate thinks the fault always exclusively belongs to other people.
  5. Do you have an example of a time when a coworker seemed very satisfied with the quality of their work, but you had to point out that the quality was actually low? How did you go about telling them? How did you feel about it? What was the outcome?
    • This is obviously about #FierceFeedback, but what secondary values are involved?
    • Is this a person who approaches these situations intellectually, in a #BestIdeasWin spirit?
    • Is this someone who uses this kind of awkward situation as a teachable moment in an #InItTogether spirit?
  6. Which project that you’ve worked on had the hardest time finding a common vision or goal? What made it so hard for the team to agree? How did you help resolve the issue? What did you learn?
    • This question could reflect on #DrivingForAlignment, on #InItTogether, or #BestIdeasWin – all alone or in combination.
    • If the candidate’s first answer is about working with production artists, it might be worth asking a followup about working with engineers, or vice-versa. What does the difference between the answers tell about their approach to relatively more or less powerful collaborators?
    • This is a tweaked version of a question from the Leadership question bank – it’s intended to see not just what the candidate did, but also how they understood the misalignments that led to the problem in the first place.
  7. Tell me about a time when a team member was expecting something from you, but you were unable to deliver on time. How did you go about it? Did you take any steps to resolve the issue? What was the outcome?
    • This one is a good index of both professionalism and of realism. How well does the candidate accept responsibility for a failure? Are they fairly objective about their own part in the problem? How did they deal with the negative consequences?
    • This can be a good opportunity to see if the candidate recognizes & values an #InItTogether approach from teammates.
    • It can be interesting to pair this with an example of the opposite situation, when it was another party who dropped the ball. Does the candidate apply similar standards to their own behavior? Do both answers illustrate an #InItTogether attitude?
  8. What do you think is your finest moment in supporting artists on your team? What’s the biggest hurdle you helped them overcome?
    • This one only really works for people with a studio background doing artist support work
    • Is the candidate somebody who focuses on technical help (“I wrote a tool to…”), on education (“I held a series of brown-bag tutorials…”), or on relationship building (“Once I got them to trust me, I…”)?
    • Does the way the candidate refers to colleagues indicate an inclusive mindset?
  9. What was your biggest frustration in supporting artists? How did you work through the problem?
    • For people with a studio background doing artist support work
    • Does the candidate’s answer show a #UsersFirst mindset (ie, they were frustrated on behalf of users)? Or is it more of a perfectionist mindset (the team picked a wrong approach)?
    • Did the candidate try to fix the problem with tech, with social engineering, or by finding a way around it?
    • Did they help to create alignment toward a new workflow or outcome in the end? Did they show leadership in making sure the problem did not recur?
    • Does this show #InItTogether values? Or was this more a case where advocacy, persistence, and #BestIdeasWin were the way forward?
  10. Tell me about a situation where you think a bias had a negative impact on some of your teammates or on the product you were working on. What steps did you take? What did you learn from it?
    • Ideally this is a question that you can insert into follow-up on another answer where team interactions are at play. (“Do you think there were issues of bias at play in that team…?”)
    • Does the answer line up well with Unity ERO principles? Followup discussion is a good place to talk about Unity’s approach to inclusion.
    • Don’t forget: biases come in lots of forms! Problems confronted in Europe might be quite different from those in the US, but the ERO framework applies to many kinds of bias.
  11. How would you describe the difference between production artists and tech artists?
    • This question throws light on the candidate’s views of both disciplines.
    • Are they respectful and appreciative of the differences, or do they tend to be patronizing towards production artists? Is their attitude #InItTogether?
    • Does their view of tech art include a #UsersFirst perspective? Are they in it to support other artists, or just to indulge their own tech interests?
    • Have they been on both sides of the fence? Do they appreciate the pressures and obligations of their artist colleagues?
  1. Problem Solving
    These questions investigate the candidate’s approach to roadblocks: are they good at self-teaching, chasing down information, and making decisions – or do they need to be told what to do? Can they analyze and solve complex problems? How do they deal with ambiguity and time pressure?

  2. What do you think of as the trickiest pipeline you ever had to create or maintain? How did you go about designing it? How did you iterate with the art team to improve the workflow?

    • “Trickiest” here can mean many things – it could be impossible performance requirements or target technology that was unstable, but it could also be a team where politics made it hard to adopt best practices, or a legacy tech that required strange workarounds. The choice of “trickiest” can give an insight into the candidate’s outlook: are the hardest problems technical or social? Are the solutions coming from new tech, new workflows, or brute force?
    • If the negotiations that led to the final outcome were complex, what insight does this give you to the candidate’s approach to collaboration?
    • Does the discussion show a good understanding of the underlying tech? Or is the solution messy because nobody on the team was able to analyze the problem successfully?
    • Is the solution a radical #GoBold kind of moonshot?
    • Did the candidate fight to make things user friendly in a #UsersFirst spirit?
    • Did they take a lot of thankless tasks to be #InItTogether with the artists?
      • Or, did they just demand that the artists follow orders?
  3. What’s the most obscure or difficult problem you ever had to debug? Tell me how you figured it out.

    • This doesn’t only mean code debugging – a lighting artist trying to understand why some combination of settings produces bad results is “debugging” in this sense.
    • What level of technical insight does the candidate show as they explain the problem? This can be a good window into their technical experience.
    • Did they teach themselves what they needed to know, or did they rely on finding the right colleague?
    • Does the candidate prefer to handle problems solo, or to assemble a team to find answers?
    • If resolving the issue involves team conflicts, you may be able to add a followup question from the “Collaboration” section to get at the candidate’s approach to teamwork.
    • Potentially, you could ask this as “Out of all the problems you’ve had to debug, which one are you proudest of figuring out?”
  4. Tell me about a time when you were given a task that was outside your comfort zone. How did you react? What steps did you take to deal with the situation?

    • This can be a good question for younger TAs, since it’s not really specific to industry experience. For a young candidate, an answer about a class where everybody else had better preparation is perfectly valid.
    • Dealing with this kind of situation is a great test case for #GoBold – can the candidate thrive with uncertainty?
    • Answers could skew technical or be mostly interpersonal; the choice might be part of a larger pattern in the candidate’s approach to teamwork.
    • Does the candidate find the information they need to survive in a tough situation? Finding an engineer to teach you or reading the docs or googling your way into competence are all valid strategies for a TA who is able to show self-help in a tight spot. Is this person able to rise to the challenge of ambiguity and imperfect information?
    • Finding a collaborator to help with the hard part can reflect on #BuildingRelationships
    • Self-teaching for tech issues can reflect on #BestIdeasWin
  5. Have you ever been involved with a project that really fell short of its potential or just plain failed? What did you learn from that failure?

    • This question is not great for younger TAs, since they may not have enough experience to answer and may be very self-conscious about being associated with “failure.”
    • It’s instructive to see what people focus on in their postmortem analysis: Are they able to look at it dispassionately or are they prone to blaming themselves? Others? Fate?
    • Do follow-up questions reveal a candidate who is able to learn from missteps? The best test for resilience is whether the candidate learns and grows after hitting a roadblock.
    • Another good follow up is “if you knew then what you know now, what would you do differently?” This is particularly useful for senior candidates, since it can let them show growth over the course of their own career.
  6. What is your favorite crazy hack solution to a production problem?

    • The emphasis here is on hack – this is not “what is your proudest technology moment?” it’s more like “when were you able to pull something off that by rights should have been impossible.” The emphasis is on cleverness and mental agility, not pure technical skill.
    • Sharing a hacky story of your own can be a good icebreaker when transitioning into a technical stretch of the interview – it’s less threatening than a serious tech question and helps lighten the mood.
    • This can be a small-scale example of #GoBold and #BestIdeasWin – sometimes those best ideas are competing against “No idea at all”
    • One potential red flag here is a hack that was slipped into a project illicitly – does the candidate take production discipline and release management too lightly?
    • For younger candidates this can provide a useful indirect view of their technical foundations – is the hack there because it is creative problem solving, or because they don’t know the right way to achieve their goals?
  7. Have you ever worked with somebody who had a lot of trouble unblocking themself? Were you able to help them become more self-supporting? If so, how?

    • This is not a great question for younger TAs who have not worked with a lot of people.
    • The surface value of this question is how it shows the candidate’s ability to build relationships and communicate with teammates who may have a very different outlook and skill set – it relates to #BuildingRelationships and #DrivingForAlignment and, maybe, to #FierceFeedback
    • Indirectly this question is good for showing how the candidate approaches ambiguity: do they have empathy for people who don’t “get it?” Or are they impatient with people who don’t share their knowledge and skills?
    • Does the way they approached their colleague indicate a good #InItTogether outlook?
    • Does the story indicate a willingness to try different approaches to reach different audiences?
  8. Production Wisdom
    These questions help to gauge the candidate’s approach to content production: are they used to a highly structured process with a rigid pipeline, or are they coming from a more improvisational space where anything goes? Are they used to spotting and eliminating potential bottlenecks? Are they inclined to fix problems using technology or using rules and processes?
    In this space it’s important to account for differences in the scale and social setting of different projects: the best practices for a 300-person movie production team will be quite different from the rules that fit a 20-person indie game studio. This list doesn’t include informational questions like “how big was your team” or “what’s the biggest production you worked on” – if you can’t glean those from the candidate’s resume you’ll probably want to get a sense of them early to contextualize other answers. Most of the questions don’t work very well for people who have never worked in production professionally.

  9. Out of all the productions you’ve participated in, which went the most smoothly? What made it easy to get out the door? What aspects of that production do you try to remember when approaching other problems today?

    • The important thing is how well the production process went – not the ultimate success of the project. An 18-month crunch that scored an 85 on Metacritic but ended several marriages on the team is not “most smoothly” for purposes of this question – while a small team hitting deadlines and making good content for a game that disappeared into the lower tiers of Steam can still teach us a lot here.
    • We’re fishing for the candidate’s idea of what a smoothly functioning project looks like: is it a good team that collaborates well? Is it a solid pipeline that stays out of the way of creatives? Is it well managed technical risk, or tech bets that paid off well? What does the answer say about the candidate’s ideal environment?
    • This is not a great question for juniors or students, who won’t have a lot of grounds for comparison.
    • How much did the candidate contribute to making things go smoothly? Do they take all the credit, or give props to colleagues in the spirit of #InItTogether?
    • Does the production actually sound like it went well to you? Is the candidate perhaps missing some best practices or relevant tech knowledge that could have made things go better?
  10. What part of a typical production cycle do you find the most enjoyable? Why? What part of production do you like the least?

    • Does the candidate enjoy dreaming up new tools, processes, or visual targets, or do they prefer getting things set up cleanly and running a tight ship?
      • A tech-focused person may focus on innovation or on bulletproofing.
      • A process-oriented person might focus on setting standards and best practices during pre-production
      • A craft-oriented person might prefer the last stages, when things are mostly working and it’s possible to focus on perfecting the look.
      • A service oriented person may enjoy the camaraderie and #InItTogether spirit of the production crunch.
    • Do they enjoy the collaborative parts of the process (brainstorming in pre-production, helping artists during the ship cycle)?
    • Overall, this could swing in a #BestIdeasWin / #GoBold or a #InItTogether direction.
  11. Tell me about a production that really went off the rails, and how you think it could have been saved. What did you do to try to make it better? With the benefit of hindsight, what could you have done differently that would have gotten to a better outcome?

    • A good follow up to the question about the production cycle, with a similar range of options – does the candidate see success and failure from a technological perspective? Is the problem in team relationships? Should there have been a more rigid set of processes? Is it bias or unfair treatment?
    • The “how could you save this” is a chance to see if the candidate has something insightful to say beyond “we should do it better”. Do they see systemic issues that could be addressed in the tech, the culture, or in the way the tech and the culture interact (the latter is a good marker for insight).
    • While getting the details, look to see if the candidate engaged proactively to help fix things (good!) or if they sat on the sidelines and complained (bad).
    • Did they offer #FierceFeedback?
    • Did they handle the problems with an #InItTogether spirit?
  12. In your experience, what kind of pipelines generate the most friction? What makes this particular kind of content so problematic? If you had the time and resources, what would you do to fix them?

    • This question might benefit from targeting the candidate’s specialty field – ie, an environment person will talk about collaborating on a world art pipeline, not character workflows. There’s no “right answer”; this is intended to capture how well the candidate thinks about data flows in production. Some key things that indicate production savvy:
      • Does the candidate differentiate between different operations that are easy to iterate on (ex: tweaking a bitmap) and those where changes are costly (ex: changing a character’s skeletal topology)?
      • Does the candidate have smart things to say about when and where to prototype, graybox, or validate content? How would they mitigate the risks of bad upstream decisions?
      • Are they overly optimistic about the ability to predict problems?
      • Are they good at spotting areas where work can be parallelized?
    • Does the candidate have a realistic understanding of the creative process? Does their point of view leave room for changes of direction and exploration?
    • Do they recognize the problems that come from resource contention, for example from having a dozen artists constantly waiting on one un-mergeable binary master file?
    • How do they accommodate the needs of diverse contributors to a project?
    • If they have a proposed technical fix, is it feasible? Is it innovative?
    • If they have a proposed organizational or process fix, is it pragmatic and achievable? Is it #InItTogether?
    • Do their suggestions come from a #UsersFirst perspective?
  13. PIPELINE DESIGN SCENARIO

    • This is an interactive exercise in which we ask the candidate to provide a high level design for a content pipeline. We provide a basic sketch of requirements and resources and walk through the process interactively over about 20 minutes of real time. We have extra information available, but only if the candidate asks for it. The basic structure has five parts:
      • Initial spec with a problem and a target
      • Iterate with the candidate as they ask questions and flesh out an initial approach.
      • A preplanned change to the initial spec – ideally, one that puts stress on their initial design.
      • Iteratively adapt to the change, again with possible Q&A as they gather requirements
      • Short retro (2-3 minutes) on how they would approach another job for the same “client”
    • The point of the exercise is really to see how well the candidate copes with a limited information situation.
  14. Industry Knowledge
    These questions help us gauge how well the candidate is acquainted with recent developments in the industry. Their interest in new technologies and changes in the production landscape is important for understanding how well they’ll be able to help Unity innovate and stay relevant. Some of these questions are purely informational and don’t need much explanation. For junior candidates we don’t expect much in-depth knowledge; for seniors, a lot of the value comes from their insights into relative strengths and weaknesses – less “Is Unreal better than Unity?” and more “What kinds of situations would make you choose Unreal over Unity, or vice-versa?” It’s a good idea to let candidates know it’s alright to think Unity is less than perfect – we know there’s room for improvement and making things better is part of the job. If the position is not strongly linked to prior Unity experience, make sure to emphasize that so the candidate doesn’t waste time apologizing for lack of Unity skills. You can encourage them to ask about how Unity approaches problems to provide a point of comparison to tech they do know.

  15. What realtime engines are you familiar with? What kind of experience do you have with the different engines?

  16. What would you say are the strengths and weaknesses of the different engines? If you were doing a personal project, which one would you pick?

    • Does the candidate focus on UI/UX concerns? On raw performance? On tools and pipeline? Or…?
    • How nuanced and balanced is their view of the strengths and weaknesses? Are they reacting to experience or to marketing videos? Do they see technical and workflow tradeoffs as design choices, or just as “good” and “bad”?
    • Are they emotionally hung up on just one product? Are they self-aware about that?
    • The personal project question can also be very revealing – it can show what they are interested in as creators and/or technologists.
      • Is it a quest for a very specific visual effect?
      • Is it essentially a tech demo?
      • Is it creative or exploratory? Does it show out-of-the-box thinking?
      • Is their hypothetical passion project related to the job we’re hiring for?
  17. If you could change only one thing about your favorite engine, what would it be? Would that change if this was a business decision instead of a personal one?

    • Does the answer focus on usability / workflows? That can be a #UsersFirst discussion, or it could just be “I hated this and I want it fixed even if it’s trivial”.
    • On technical or graphic sophistication? That can be a #GoBold / #BestIdeasWin conversation… or it could be a generic “pretty feature” wishlist item.
    • Overall, do they show a perspective on the difference between personal preferences and what’s best for the user community (or the business)?
    • How does their answer relate to the position we’re hiring for?
  18. Does your preferred engine have a distinctive look or feel, even when used across a diverse array of projects? If so, what creates that impression?

    • What do they reference in their explanation?
      • Lighting models, like Frostbite’s dynamic lighting?
      • Antialiasing strategies, like Unreal’s default heavy TAA?
      • Content organization imposed by engine limits (like, the difficulty of doing an outdoor game in Source, or doing a big streaming game in vanilla Unity)
      • Pipeline imposed limits (like, levels coming directly from a DCC instead of an iterative game editor environment)
      • Animation features, like runtime retargeting or runtime IK.
    • Does their discussion indicate broad familiarity with the common problems across engines and productions?
    • Are they able to articulate the technical reasoning behind the perceived difference (= more senior), or is it a gut feeling (= more junior)?
  19. UI/UX and Design
    These questions are for TAs who will be focused on creating or improving tools. They help gauge how a candidate takes user needs into account, how they handle user feedback, and their commitment to #UsersFirst workflows.

  20. Describe your process when designing art tools and workflows.

    • Does the candidate have a structured approach to usability (= senior) or are they merely reacting to problems and making tools on the fly (= more junior)?
    • Do they talk to artists one at a time, or in groups?
    • Are they interested in user input? Do they take user input unfiltered, or do they work mostly without user feedback?
    • Do they combine respect for user wishes with the ability to push back against unreasonable or unrealistic requests?
  21. What are the most important parts of user experience in tools development? What do you think art tools frequently get wrong?

    • This is mostly an opener to get at the candidate’s values in UX development.
    • Are they self-aware about the difference between UX (user experience) and UI (user interface)? More senior candidates will generally be able to make the distinction clear, more junior ones often think that the graphic design of the UI is the only thing that matters.
    • What values does the choice of “most important” represent? Is the attitude more user-centric (#UsersFirst) or is it tech-centric – which could be #GoBold for an innovative approach, or maybe just not be very user-friendly at all.
    • Does the candidate focus on user convenience? On long term maintenance costs? On technical correctness?
  22. Tell me about a time when you had to support beginners and experienced users with the same tooling. How did you make this work? What kinds of design choices help support a broad range of users at different levels of expertise?

    • Does the candidate understand how to progressively disclose information for more advanced users?
    • Does the candidate have good insights into where a beginner user needs help and where they have to work with external constraints?
    • Does the candidate use UI and workflow to help educate users into higher skill levels?
  23. What’s the best workflow you ever helped to create? Why do you think it’s the most effective?

    • What values are shown in their choice of “best”? Are they proudest of solving a technical problem? Of satisfying a user’s needs? Of making a pretty, slick piece of software?
  24. Tell me about a time when your instincts about a tool or workflow were not shared by the artists you worked with. How did you resolve the disagreement? What do you think you and the artists learned from the clash?

    • What kind of perspective does the candidate have towards users: is it respectful or patronizing?
    • Does the candidate distinguish between the popularity of a tool (say, a minor improvement that removes an irritating but trivial hiccup) and its importance (a workflow change that saves a lot of time, increases quality, or prevents problems on a large scale)?
    • Good window onto #DrivingForAlignment and #InItTogether
    • A real test case for the practical value of #UsersFirst
  25. From a user experience perspective, what’s the best art tool you’ve worked with? If you could import some concepts from that tool into Unity, what would they be?

    • Are they able to separate out good design from good technical capacities? Houdini, for example, is extremely powerful but a bit of a mess on the design front. Do they focus on information architecture (menu design, consistent design language) or merely on features?
    • What user outcomes does the candidate prioritize: Technical power (ex: Houdini)? Approachability (Max, Blender)? Ergonomics for pro users (ZBrush)? Extensibility (Maya)?
    • Do they see pros and cons in their favorite tool?
  26. Management
    This section is, naturally, for manager candidates. Only about half of a manager interview will be “unique” to managers – collaboration and team behaviors from section one should take up maybe the first 5-10 minutes of an interview; domain, tech and industry questions should account for another 15-20 minutes, leaving 20-30 minutes for management-specific questions. Management questions are even more story-based than other areas. Here we are trying to find people who know how to balance accountability for results with a positive, supportive work environment (which includes both reasonable work-life balance and good team behaviors).

    1. How do you balance your identity as a tech artist and your identity as a manager? What challenges come with spending more time on people and less time on code or content?
    • How does the candidate view leadership transition - would they really rather be an IC?
    • If they want to manage, is it because they enjoy empowering people? Or because they equate a manager title with power?
    • Are they working to stay current on the tech part of their identity?
    • An easy warmup question for starting in on management issues.
    1. What part of being a manager do you like the least?
    • This might speak to IC / Manager tension, but it could also reflect attitudes toward leadership. Does the candidate hate delivering feedback? Or do they hate having to manage time and money constraints? Do they dislike bureaucracy? Or are they averse to process?
    1. … and what is the best part of being a manager?
    • Does this reflect Unity values, in particular does it have an #InItTogether or #BestIdeasWin mindset?
    • Or is this a person who is extremely product focused and loves to ship things?
    1. What’s the best thing one of your managers ever did for you?
    • What values come out of “best”? Does the candidate see this as personal support, as education, as empowerment?
    • Do they explicitly intend to be like that great manager?
    • Did they learn good lessons about coaching, empathy, or navigating business?
    1. Tell me about your coaching experience – how do you help your employees grow? How do you adapt to employees with different learning styles?
    • Positives: flexibility, accommodating a wide range of learning styles, willingness to learn in order to teach.
    • Downers: not interested, “it gets in the way”, trying to push people down paths they really don’t want or need.
    1. How much of your work as a manager is devoted to sharing information? Do you find it a challenge to keep your team informed about what the rest of the company’s up to, and to keep other parts of the company up to date on your team’s work? How do you handle information flow?
    • Ideally the candidate is excited about making sure that information is flowing freely – information hoarding is a red flag here.
    • Do they have experience busting silos?
    • Are they thoughtful about how they balance process-oriented solutions (regular newsletters, status meetings, show-and-tells) with the need to let people focus on work?
    1. Have you ever had to deal with a “talented jerk” as a manager or a teammate? How did you handle that? If you had to approach the same situation again today, what would you do differently?
    • How do you balance #BestIdeasWin and #InItTogether?
    • Was #FierceFeedback given?
    • Is the candidate introspective about how they could improve their own performance in future? Or do they place 100% of the blame on the other person?
    1. Out of all the different methodologies out there – Agile, Scrum, Waterfall, Extreme – what is your preferred method for getting things done, and why?
    • The “why” is really the key question here, since not everybody will have experienced more than one methodology.
    • You may get interesting but not super relevant stuff about “Agile” vs “Scrum”.
    • Good things to look for:
    1. Pragmatism
    2. Flexibility
    3. Somebody who ships… but with sustainable work practices, not a deathmarch mentality
    4. Wants a plan, but recognizes plans change
    • Red flags
    1. Impatient with process of any kind
    2. Allergic to planning
    3. What makes you want to be a manager?
    • This question is mostly relevant to junior candidates or people who have never managed before.
    • Key things to look for:
    1. Do they see management as synonymous with power?
    2. Are they heading into management or more interested in getting out of production?
    3. Is this an experiment or part of a thought-out transition plan?
    • For juniors, if the interviewer is already a manager, a good follow-up is “What is there about management that you don’t think you know?”
    1. What was the hardest thing you ever had to do as a manager?
    • The stories here can vary a lot – firing or laying people off, for example, is traumatic, but you may also hear stories about marriages failing, people being ill, businesses collapsing or projects imploding.
    • The key thing to look for here: where does this candidate draw the line between pragmatism and empathy? It’s traumatic to fire somebody who is underperforming but – presumably – they were underperforming. We’d prefer people who can look after the needs of the business while remaining supportive and empathetic even for employees who are not making the cut. How did the candidate try to support their teammates before things went really bad?
    • This could also be a more situational answer “the hardest thing I ever had to do was to ship ______ without a graphics lead.” This is a great way to look for leadership skills and values.
  27. Domain Knowledge
    This is a large set of questions relating to specific techart tasks and skill-sets. These questions are grouped by discipline (lighting, for example, or animation).
    When asking domain-related questions we want to focus on understanding where the candidate fits into the spectrum of skills related to the domain, rather than “passing” or “failing” them based on past experience.
    You should check out the appendix on Technical Qualifications for suggestions on how to rank the level of a candidate’s technical skills answers.

  28. Programming
    These questions are intended to identify the candidate’s familiarity with programming concepts and best practices. When interviewing code-oriented candidates remember to look for aptitude, willingness to learn, and particularly evidence of a self-help mindset, rather than simply checking to see if they have already worked with a particular technology or technique.
    In many of these questions, it will be important for the candidate to ask follow up questions to get a proper view of the problem. It’s fine to tell the candidate in general terms that part of the exercise is seeing what questions they ask. Avoid volunteering extra detail unless the interviewee is getting stuck – the questions they ask provide a good insight into their approach to problem solving. When asking about their own experiences, ask for additional implementation details on any tooling or optimization work. Make sure they can explain how they implemented any tool and why they made specific performance decisions.

    1. Would you describe yourself as an “engineer”, a “programmer”, or a “scripter”? Why?
    • This is useful for calibration, but also an interesting way to see how they understand the use of code in production. Are they focused on using code to achieve results? On writing solid, maintainable code? Or on high-performance, carefully optimized code?
    1. Are you thinking about learning any new programming language? If so, why are you interested in that language?
    • Interviewer should already know (from resume and/or prescreen) what languages the candidate knows.
    • The choice of a subsequent language is a chance to see what problem spaces they are interested in: do they want to learn C++ for higher performance or to get deeper into their engine? Are they interested in Rust because of its reputation for reliability? Do they want to learn Python for its flexibility?
    • Or, do they just see code as a tool and they learn whatever their job requires?
    1. Say I have a list of a hundred thousand items and I need to retrieve items from the list quickly. Is there a data structure you’d reach for to store the items? What else would you want to know before picking one?
    • The easy answer is a dictionary-like structure with constant-time lookup.
    • A better response is to ask if the access pattern is predictable: a dictionary is ideal for purely random access, but if you know the order in which things are going to be retrieved then a plain array might be a better choice. (A sketch of this tradeoff follows below.)
    • If the candidate is thinking about accessibility, rather than speed, they might think SQL. If so, do they know how to optimize a SQL index for fastest retrieval?
    • Do they realize that merely finding one item in 100,000 is not really a terrible performance cost even if it’s not handled optimally?
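A minimal Python sketch of the tradeoff we're fishing for, as a point of reference for interviewers (the item names and counts are invented for illustration, not the one "right answer"):

```python
import bisect

# A dict (hash map) gives O(1) average lookup by key -- the "easy answer."
items = {f"item_{i}": i for i in range(100_000)}
value = items["item_54321"]  # fast no matter where the item sits

# If the access pattern is predictable, a sorted array plus binary search
# is O(log n), uses less memory, and has friendlier cache behavior.
sorted_keys = sorted(items)
index = bisect.bisect_left(sorted_keys, "item_54321")
assert sorted_keys[index] == "item_54321"
```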
    1. We have a large collection of 3-d meshes, some which might or might not be duplicates. We don’t need to code a detailed solution – but, roughly, what approach would you use to find the duplicates? Is there anything you need to know before trying something?
    • A good part of the value of this question comes from the candidate’s follow-up questions. Here are some of the constraints they can find out by asking; it’s generally better if they ask than for you to volunteer the information.
    1. We only care about geometry duplicates – no need to consider UVs, vert colors, or materials. This actually doesn’t affect the solution but it simplifies it.
    2. We only care about local vertex positions, not world space positions.
    3. There’s no guarantee that there won’t be many copies of a given mesh.
    4. A mesh is a “duplicate” if it has the same local space vertices, regardless of vertex index order.
    5. If the candidate is a DCC expert, they may note that ‘duplicates’ could also be instances, which can be detected by analyzing the scene graph. This is an acceptable part of the solution, but won’t catch all possible cases.
    6. This is not a real-time algorithm, but we’d like it to be performant. We don’t need a hyperfast exotic solution, but brute force will be unpleasant for users.
    7. “Large collection” is thousands of meshes and millions of vertices.
    • The key thing here is to avoid an N^2 solution which compares every mesh to every other. Any good solution will avoid looping over all the combinations.
    1. A reasonable “low tech” solution would be something like binning meshes by vert count – if two meshes don’t have the same vert count they can’t be identical, so you can ignore all the comparisons outside your “bin”
    2. A higher-tech solution would be to generate a hash value from the mesh data (for example, the sorted local vertex positions); that will spot identical meshes in linear time. (A sketch of this approach follows these notes.)
    3. You could combine a vert-count prepass and hashing for a minor speedup – does the candidate consider the cost/benefit of moving past a “reasonable” solution?
    4. If they were using the scene graph in a DCC, did they combine it with other backup methods of detecting duplicates?
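For interviewers who want the shape of a good answer in mind, here is a hedged Python sketch of the "bin by vert count, then hash" approach. The Mesh interface (a .vertices sequence of local-space (x, y, z) tuples) is hypothetical, and a production tool would confirm hash matches with an exact comparison to guard against collisions:

```python
from collections import defaultdict

def mesh_key(mesh, precision=6):
    # Round to tolerate float noise; sort so vertex index order doesn't
    # matter, per the "same local verts, any index order" constraint.
    verts = sorted(tuple(round(c, precision) for c in v) for v in mesh.vertices)
    return hash(tuple(verts))

def find_duplicates(meshes):
    # Cheap prepass: bin by vertex count. A mesh alone in its bin can't
    # have a twin, so we only pay for hashing where a match is possible.
    # Roughly linear in total vertex count -- never the N^2 all-pairs scan.
    by_count = defaultdict(list)
    for mesh in meshes:
        by_count[len(mesh.vertices)].append(mesh)

    duplicate_groups = []
    for group in by_count.values():
        if len(group) < 2:
            continue
        by_hash = defaultdict(list)
        for mesh in group:
            by_hash[mesh_key(mesh)].append(mesh)
        duplicate_groups.extend(g for g in by_hash.values() if len(g) > 1)
    return duplicate_groups
```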
    1. What’s the biggest speedup or performance improvement you’ve ever added to an existing tool? How big was the gain? Why did your fix work well? What lessons did you learn from making this fix?
    • This question is really only relevant to fairly experienced coders.
    • What is their unique contribution to the problem? Was this a purely algorithmic optimization? A reduction in unnecessary work? Was it simply tinkering until things work?
    • Senior candidates will have a solid understanding of how to spot and fix bottlenecks; junior candidates may simply get lucky. With a junior, see if they have related experiences that are leading them to have a more systematic approach.
    1. If you have written scripts for multiple DCC environments, which one is your favorite, and why? Which is your least favorite and why?
    • This question is better for candidates who are more on the “scripter” end of programming. It’s intended to get at their sense for the architecture and integrity of different DCCs. However, it’s only really informative if the candidate knows at least two different environments well.
    • Here are some things they may call out:
    1. Max
      • MaxScript often intertwines UI state and functionality – this is a Bad Thing and should be something to criticize.
      • On the plus side, MaxScript’s DotNet bridge can be a good Unity crossover; have they experimented with that?
    2. Maya
      • Maya used to do a good job of decoupling front end UI and back end APIs. This has been getting less reliable lately.
      • Most experienced Maya users will prefer Python to MEL.
      • More experienced Python developers will usually have strong opinions about PyMel – there’s a tradeoff involving programmer convenience, raw speed, and startup times. Does the candidate see those tradeoffs?
      • Have they used the thin wrapper around Maya’s C++ API? This is effectively writing C++ in Python, an awkward but powerful tool – do they have interesting things to say about this hybrid kind of programming?
    3. Blender
      • Blender has a very extensive Python integration – do they have strong feelings about how easy it is to extend Blender without going to C++?
      • Because Blender’s API is a lot more “pythonic” than Maya’s, it can be an interesting point of comparison. Does the candidate have insights into the pros and cons of the two approaches? Are there performance or maintenance implications they can see?
    4. ZBrush
      • ZScript is a very unusual language – does the candidate have strong feelings about the language design and its pros/cons? Are their opinions well supported?
      • Has the candidate tried creating a DLL to extend ZBrush’s limited scripting capabilities? This is awkward to do, but can be a good sign of willingness to push technical boundaries.
    5. Photoshop
      • Photoshop scripting is notoriously inadequate. Does the candidate have strong feelings about it? Can they describe the limitations clearly?
      • What have they tried to get around the limitations of Photoshop’s native scripting? Have they, for example, tried to make a COM wrapper tool that talks to Photoshop directly? (A sketch of this approach follows below.)
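For interviewers unfamiliar with the COM route, here is a minimal sketch of the idea. It assumes Windows, the pywin32 package, and an installed Photoshop that registers the Photoshop.Application COM class; the file path is invented:

```python
# Driving Photoshop from outside its own scripting sandbox via COM.
import win32com.client

app = win32com.client.Dispatch("Photoshop.Application")
doc = app.Open(r"C:\textures\albedo.psd")      # hypothetical test file
print(doc.Name, doc.ArtLayers.Count, "layers")
app.DoJavaScript('alert("driven over COM");')  # hand off work to ExtendScript
```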
    1. Tell me something about your debugging process. When you get a bug report, how do you isolate the problem and start fixing it? What about when the problem is non-specific, like an operation that works but just seems “slow”? (A minimal profiling sketch follows the hints below.)
    • Does the candidate use a debugger, or do they fall back on beginner tricks like print debugging?
    • Does the candidate talk about the use of asserts to track expected conditions? If so, do they consider conditionally compiling the asserts out in runtime code?
    • Do they talk about structuring code to validate all the invariants early / early-outing when there is missing information? This is not, strictly speaking, debugging but it’s a good indicator for experience and code cleanliness.
    • When dealing with non-specific issues like “slow” code, do they use a profiler (good!) or do they attempt to optimize by instinct (less good)?
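As a concrete picture of the "profile first, then optimize" answer we hope to hear, here is a standard-library Python sketch; slow_export is a stand-in for whatever tool was reported as slow:

```python
import cProfile
import pstats

def slow_export():
    # Stand-in for the operation that "works but just seems slow."
    return sorted(str(i) for i in range(200_000))

profiler = cProfile.Profile()
profiler.enable()
slow_export()
profiler.disable()

# Let the data point at the hot spot before touching any code.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```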
  29. Shaders & Graphics
    These questions are intended to assess the candidate’s familiarity with shaders and the architecture of modern graphics. In addition to familiarity with common techniques, it’s important to know if the candidate understands the performance and memory implications of their choices. The ability to make smart tradeoffs, rather than simply enabling all the high end features, is a key skill set in this area. Note that PBR related questions ought to work well with candidates who are farther towards the “artist” end of the techart spectrum.

    1. How would you describe physically based materials to a non-technical artist? What makes PBR different from an older shading model, like Blinn or Phong? What do PBR materials not do?
    • This question is not about the hardcore math needed to implement a BRDF – it’s intended to see if the candidate can communicate a less-technical introduction to PBR. Does the explanation seem to show skill at packaging up complex ideas into simple, concrete, artist-friendly concepts?
    • From an artistic perspective, a PBR material is great at photorealism but can be used for other things as well. Does the candidate conflate “PBR” with “only realistic”?
    • Technical details:
    • A generic low-level answer is that a PBR shader ensures that the diffuse and specular values are properly related so that you can’t get both a high diffuse response and high specular at the same point in space.
    • An intermediate response recognizes the role of metallicity: a metallic reflection colors the specular highlights, a non-metallic one does not. An intermediate value like 50% metallic implies that at some more detailed resolution the surface includes both metallic and non-metallic responses.
    • Intermediate responses also reflect the gamut of real world reflectance, which is smaller than the mathematical 0-1 range.
    • Very dark materials (coal, tar) have an albedo of about linear 0.04. Very bright materials (snow) have an albedo of around linear 0.8
    • Advanced responses may mention energy conservation (ie, the tradeoff between diffuse and specular response reflects the re-distribution of the incoming light energy – an old-fashioned Blinn-Phong shader with both albedo and specular responses turned up high would actually “reflect” more light than it was hit by). A toy illustration follows the hints below.
    • Does the candidate seem to understand the way roughness or “microfacets” control the tradeoff between diffuse and specular reflections?
    • Does the candidate know that there are multiple, competing implementations of “Physically based” shaders?
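The toy Python below illustrates the energy-conservation point for interviewers who want a concrete picture. It is deliberately simplified and is not any engine's actual BRDF:

```python
def shade(albedo, specular):
    # In a PBR model the diffuse term is scaled by whatever energy is left
    # after specular reflection, so total reflectance never exceeds 1.0.
    diffuse = albedo * (1.0 - specular)
    return diffuse + specular

print(shade(0.8, 0.5))  # 0.9 -- energy conserving
# A naive Blinn-Phong with both responses cranked up would "reflect"
# 0.8 + 0.5 = 1.3x the incoming light, which is physically impossible.
```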
    1. When would you recommend using a specular workflow for PBR materials? When would you recommend using metallic?
    • There’s no technically right or wrong answer here, since both workflows can produce similar results. The key thing to look for is the candidate’s understanding of the relationship between workflows, artist preferences, and ease of maintenance. More senior candidates will tend to understand the tradeoffs; more junior candidates will tend to frame this in terms of “better” and “worse” when those are really situational. (A sketch of the relationship between the two workflows follows the hints below.)
    • Some things a candidate may observe:
    • A metallic workflow can be easier on some artists, since it’s explicit about where something is supposed to display metallic (colored specular) behavior, and the “metalness” color lives alongside non-metallic albedo.
    • In custom shaders a metallic workflow can allow you to save some texture memory since you only need a single metallic channel, rather than a 3-channel RGB input.
    • It’s generally easier to police the values in a metalness workflow, since you know if a given pixel is or is not supposed to behave like a metal.
    • A specular workflow can be easier for artists who want precise control more than physical correctness.
    • A specular workflow can make it easier to approximate effects that don’t fit the classic hard-surface shading model.
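A quick sketch of how the two workflows relate may help interviewers follow the discussion. The lerp-to-0.04 conversion below is the common convention for metallic workflows, though exact constants and channel layouts vary by engine:

```python
def metallic_to_specular(albedo_rgb, metallic):
    # Convention: dielectrics get a fixed ~4% monochrome specular (F0);
    # metals take their specular color from the albedo and lose their
    # diffuse term. One scalar channel stands in for an RGB specular map.
    lerp = lambda a, b, t: a + (b - a) * t
    specular_rgb = tuple(lerp(0.04, c, metallic) for c in albedo_rgb)
    diffuse_rgb = tuple(c * (1.0 - metallic) for c in albedo_rgb)
    return diffuse_rgb, specular_rgb

# A pure metal: diffuse goes black, specular takes on the albedo color.
print(metallic_to_specular((0.95, 0.64, 0.54), 1.0))
```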
    1. When working with a PBR material pipeline, what can or should be done to make sure that the artists are creating properly set up PBR materials?
    • This touches on production experience and tooling, not just shader knowledge. Does the candidate have insights into user behavior? Into the tradeoffs involved in aggressively enforcing technical correctness?
    • Technical details to watch for as evidence of sophistication:
    • Appropriate range of albedo values (typically, linear 0.04 to 0.8, though some shaders normalize this) – a minimal validation sketch follows these hints.
    • Clear understanding of the difference between specular and metallic workflows. It’s easy to know if a metallic-workflow texel is supposed to be metallic or not – what about a specular workflow?
    • What about color linearization? Does tooling know/care about the color space of input textures?
    • What does the answer tell you about the candidate’s approach to user education?
    • Do they expect to solve correctness problems exclusively through training? How well will that scale for a distributed production (or for our million plus user base)?
    • Do they expect to reject bad content using tools (ie, rejecting source control submissions with out of range values)? Who will be responsible for helping the users understand, fix, and hopefully not repeat their mistakes?
    • What’s the implied cost/benefit tradeoff between technical correctness, artistic freedom, and the need for a human being to make judgements? How flexible is the candidate’s approach?
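As one concrete example of tool-side policing, here is a hedged sketch of a check a submit hook might run. It assumes numpy and a texture already decoded to a linear-space float array; the thresholds echo the albedo range mentioned above:

```python
import numpy as np

ALBEDO_MIN, ALBEDO_MAX = 0.04, 0.8  # plausible real-world linear albedo range

def out_of_range_fraction(pixels, tolerance=0.001):
    # Report the fraction of offending texels so the pipeline can warn on
    # small violations and reject gross ones, rather than a blanket ban.
    bad = (pixels < ALBEDO_MIN - tolerance) | (pixels > ALBEDO_MAX + tolerance)
    return float(bad.mean())

# Random data standing in for a decoded albedo texture.
texels = np.random.rand(256, 256, 3).astype(np.float32)
print(f"{out_of_range_fraction(texels):.1%} of texels out of the expected range")
```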
    1. How do you go about diagnosing the performance of a scene in your projects? What tools do you use to spot performance problems? What do you look for as likely causes of poor performance?
    • This is a good diagnostic for technical experience with graphics performance. More junior candidates will probably focus on rules-of-thumb, such as “don’t use complex shaders” or “reduce polycounts.” More experienced candidates will use diagnostic views (such as shader complexity or overdraw displays). Senior people will tend to reach for tools such as PIX, GPA, or RenderDoc.
    • If, as interviewer, you’re unfamiliar with their preferred tool: see how well they can explain it to you. Are they good at giving you a sense of how the tool works as well as what information it contains?
    • This question will tend to naturally segue into talking about optimization, so check the notes on the following optimization related questions too.
    • For answers relying on rules of thumb, how accurately does the candidate describe those rules?
    • Do they know the difference between the physical vertex count of a mesh and its runtime representation, where differences in normals, UVs, or vertex colors can effectively “duplicate” vertices?
    • How do they weigh the relative impact of draw call count, shader complexity, texture sizes, and polycount? In practice any of these could be a bottleneck – do they recognize that, or are they convinced that only one of these things matters?
    • Do they have a mature approach to the difference between “good rule of thumb” and “scientific truth?”
    • For answers based on diagnostic views, how well do they understand the relative costs of shader complexity or ALU vs overdraw?
    • On most modern hardware, overdraw is more impactful than shader complexity.
    • Do they recognize the fact that most ALU costs scale with screen size?
    • Do they talk at all about the costs of blowing through the texture cache?
    • Do they know that texture sampling is generally much more expensive than pure math on modern hardware?
    • Do they know that some texture formats (3d formats, floating point, trilinear or aniso sampling) are more expensive than others?
    • Do they use the Unity Frame Debugger? If so, do they recognize its uses and its limitations? This can indicate somebody who is a Unity power user (if used in combination with other techniques) or it might indicate somebody who is too narrowly focused on Unity solutions.
    • For more advanced candidates who use graphical frame debuggers, look for examples of good forensic skills:
    • How do they go about finding the relevant draws when tracking down a problem?
    • If they use PIX, are they familiar with the “Dr Pix” reporting tools for spotting things like texture bandwidth saturation?
    • Are they familiar with using resource views to, for example, see how big the textures used in a given draw call are?
    • Do they have enough experience in more than one debugger to recognize the strengths and weaknesses of each? Which would they pick to solve a particular problem?
    • Overall – a pragmatic approach to profiling and debugging (based on data) is always better than a rote approach based on rules. Different projects have different needs, and the ability to accommodate those flexibly is a sign of technical maturity.
    1. How would you go about optimizing a scene that was suffering from too much overdraw?
    • One obvious aspect of this is testing the candidate’s understanding of “overdraw.”
    • Do they know about the cost difference between alpha blended and alpha tested geometry?
    • Do they understand how hi-z or other forms of depth testing can improve performance on modern hardware?
    • Do they understand how depth sorting can influence performance in conjunction with depth testing? For example, drawing back to front works better for the visuals of alpha blending, but drawing front to back with depth testing lets the hardware reject occluded pixels early.
    • How would they approach optimizing overdraw? All of these are useful:
    • Switching from alpha blend to alpha test transparency
    • Aggressively trimming out transparent areas
    • Switching to opaque shaders in lower LODs
    • Tweaking depth sorting to increase up-front rejections
    • Do they think past just optimizing individual assets? For example, would they consider things like
    • occlusion geometry to increase pixel rejection
    • Ray-marched volumetric effects to simulate large amounts of geometry at once
    • Tooling to automatically trim alpha-tested cards close to their alpha thresholds
    • All possible ideas have pros and cons – that’s why this space is so unsolved. However, real or proposed alternatives to the “standard” approach are a good place to assess both the depth of the candidate’s knowledge and their technical creativity – a good place to look for #BestIdeasWin
    • Does the candidate approach this as “here’s what I would do as a solo technologist” or “here’s how I would train my artists”? How does the answer relate to Unity values?
    1. Tell me about a time when you helped a very junior artist – maybe somebody who had great visual skills but who made content that wasn’t very performant – become good at making shippable assets.
    • Ideally this can be a story question which will give you some insight into the candidate’s #InItTogether skills and their aptitude for team building as well as highlighting their knowledge of the graphics pipeline.
    • What tools does the candidate rely on for teaching?
    • Do they provide close 1:1 mentoring? (#InItTogether, #BuildingRelationships)
    • Do they create training materials?
    • Do they jump straight to talk about profiling and debugging?
    • If the story includes roadblocks, how did the candidate work around those? Were they good at helping their artist colleague past the intimidating parts?
    • Did their explanations indicate the ability to clearly and effectively summarize complex information?
    1. When you want to assess the visual quality of a frame rendered in a video game, where and what do you look for in the image?
    • This is really about the technical delivery of the image (“graphics”) and not art content or style.
    • This is a pretty open-ended question, but a good proxy for both depth of graphics knowledge and for experience of production problems. To some degree “more is better” here.
    • Likely callouts can include (but aren’t limited to):
    • Aliasing
    • Filtering
    • Framerate and/or sync tears
    • Banding
    • View distance
    • Over-fogging or lack of atmospheric scattering
    • Shadowing artifacts (bias, noise, resolution, cascade boundaries)
    • Balance of direct and indirect illumination
    • Valid PBR behavior (eg, no colored specular on non-metals)
    • Ghosting (from too much motion blur or poorly tuned TAA)
    • Tonemapping or lack thereof
    • HDR vs LDR behavior
    • Bloom tuning
    • LOD pops
    • Framerate consistency during effects or animation.
    • The candidate will probably have a few of their own as well. Are these considerations realistic or are they a bit out of date (say, heavy focus on polycounts or particle counts)?
    1. What do you think is the most interesting near future tech in graphics right now… APART FROM RAYTRACING?
    • This is mostly an icebreaker question, since there is clearly no “correct” answer. The real intent is to identify candidates who are interested in the nuts-and-bolts of graphics tech (say, a new filtering algorithm or a better way to handle anisotropic hair rendering or something like virtual texturing that changes some of the traditional performance tradeoffs).
    • Excluding raytracing is intended to separate out people who are actually interested in graphics tech as a whole from people who are mostly focused on back-of-the-box features.
    • Depending on the position the answer could be quite relevant or it could merely be a proxy for general level of graphics background. It might be a good idea to check with your developer colleagues to know if there’s a particular tech they think is highly relevant to the job being sourced.
    1. Tell me roughly how you’d design a shader to do a procedural brick material that didn’t use any textures.
    • Keep in mind (and tell the candidate) that this is not a whiteboard coding question – this is more like the conversation you’d want to have in the kickoff meeting for a new project. The goal is to understand the basic design parameters and the expected implementation – not to produce a working shader during the interview. There’s no “right” answer they need to get – the point of the exercise is to hear them think about the problem out loud.
    • Since we aren’t whiteboard coding this also is a way to see how well they explain abstract concepts verbally. A helpful prompt is “I’m the producer and I need you to explain to me what work will have to get done so I can schedule it.”
    • On the purely technical level, this question is going to need the candidate to show they get basic shader math. But it’s also about what questions they might ask; jumping right into math without asking any questions is not ideal.
    • Do they think about satisfying well known needs? Or will they jump in and try to satisfy all possible needs?
    • If they do ask, stipulate that this is a realistic material, intended to be used in a procedurally generated urban environment. It will need bricks with varying colors, aspect ratios, amounts of mortar, and regularity. It does not have to handle custom brick bonds or anything besides rectangular bricks – no herringbones, hex-tiles, etc. The no-texture requirement really means “no user-supplied textures” – lookup textures or shared resources, like a detail bump map, are allowed.
    • Do they ask about performance budgets?
    • A no-texture solution is good for memory-constrained platforms – but not so good for one that is struggling on ALU or, possibly, on mobile battery life.
    • In this case we’ll be running on midrange PC hardware, so this does not have to be a highly optimized solution but reasonable caution around shader complexity is required.
    • Do they ask about target hardware and platforms?
    • 4k resolution is extremely important for something like this – a brick pixel shader that looks good covering 10% of the screen could become very slow when filling a 4k screen unless it’s properly implemented.
    • In this case we are expecting to run on 4k (as above, using mid-level PC hardware) – this means that performance at full screen will become an issue for many techniques, especially those that try to naively use noise for variety.
    • Do they ask about other constraints, eg, camera moves or animation?
    • The shader doesn’t need to animate but does have to handle the case of a camera getting quite close. The camera will be flying through a procedural environment and can’t stutter when the entire screen is filled with a fairly close up shot of the bricks
    • Since the camera will be flying through the scene, noise sizzle might be an issue.
    • Here are some follow-up questions to help the candidate focus if they get stuck. (A minimal sketch of the core math appears after this list.)
    • How are you going to tell the shader when you are in brick or in mortar?
    • This is probably a variation of “multiply the UVs by some number of repeats in X and Y; use floor or ceil to know which brick you are in, and use frac to get the UV space of a single brick.”
    • The parts of the brick which might want to be beveled can be identified using the same trick
    • Adding a secondary distortion to the UV values driving all of the above will allow for a bit of natural variation to make things less mechanical and perfect
    • How are you going to let the user offset the bricks to accommodate different styles of brick setting?
    • Likely, use floor or ceil on one of the coordinates to get the row; then modulo(row, 2) will be 0 for even rows and 1 for odd rows; you could add some fraction of that modulo to offset alternating rows.
    • They’ll need controls to manage the height and aspect ratio of the bricks and the degree of offset on alternating rows.
    • An advanced version might allow you to isolate every Nth course for, say, a Victorian style ‘striped’ brick.
    • How would you differentiate between the bricks and the mortar?
    • If they can get the row-column part figured out, having 0-1 uvs inside each brick lets you use simple math to see if you’re close to the edge
    • Thresholding that value will give you “brick” or “mortar.” At the simplest level, output a different color, roughness, and normal value based on that.
    • A nice touch would be to increase ambient occlusion inside the mortared regions.
    • Another nice touch is to slightly tweak the normal of the entire brick section to avoid overly uniform lighting.
    • How would you add randomness to the color of the bricks ?
    • There are lots of valid answers here; noise being the easy one. However, can they describe how they would make each entire brick get a consistent random color? Typically that would involve generating a hash value from the row and column of the brick (see above) then using that hash to pick a color somehow.
    • Do they note that using the row-column hash alone will mean that the Nth brick of every texture repeat will be the same? How would they try to make sure that brick [0,0] is not the same color on two different buildings?
    • Or.. do they just add some per-instance UV offset to hide the repeat?
    • What issues are likely to be a problem with this shader in production?
    • All the easy answers to this question will include aliasing for thin lines – this is just a problem with this kind of math-based approach.
    1. Don’t bother to make the candidate explain a fix, but knowing that it’s a likely problem is a good differentiator between intermediate and advanced.
    2. The likeliest “fix” you’ll get from somebody who has not had to do this before is a super-sampling approach (where you check the neighborhood of the pixel a few times to avoid aliasing). This is quite reasonable but has interesting perf implications (see below)
    • Another differentiator will be the way the cost scales on 4k hardware – since the solution is per-pixel, elaborate procedures will probably be expensive close up. Again, a full solution is out of scope, but this is another marker of experience.
    • Experienced candidates might note that many noise types will “sizzle” during flythroughs. A mix of different frequencies and fixed resolutions is a good way to minimize this.
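    • For interviewers who want a concrete reference, below is a minimal Python sketch of the core math described above (cell lookup via floor/frac, alternating row offset, mortar threshold, per-brick hash color). A real implementation would be shader code; this is only the logic, and every name and constant here is illustrative.

```python
import math

def hash21(ix, iy):
    """Cheap deterministic hash of integer brick coordinates to 0-1.
    Illustrative only -- not a production-quality hash."""
    h = math.sin(ix * 127.1 + iy * 311.7) * 43758.5453
    return h - math.floor(h)

def brick_pattern(u, v, bricks_x=8.0, bricks_y=16.0, mortar=0.05, row_offset=0.5):
    """Return (is_mortar, brick_color) for a UV coordinate in 0-1 space."""
    # Scale UVs so each brick occupies one unit cell.
    x, y = u * bricks_x, v * bricks_y
    row = math.floor(y)
    # Offset alternating rows: row % 2 is 0 for even rows, 1 for odd rows.
    x += row_offset * (row % 2)
    col = math.floor(x)
    # Fractional part = position inside this brick's own 0-1 space.
    fx, fy = x - col, y - row
    # Close to any cell edge -> mortar.
    is_mortar = (fx < mortar or fx > 1.0 - mortar or
                 fy < mortar or fy > 1.0 - mortar)
    # Per-brick tint: hashing (col, row) keeps the color constant
    # across the whole brick.
    t = hash21(col, row)
    brick_color = (0.45 + 0.15 * t, 0.20 + 0.05 * t, 0.15)
    return is_mortar, brick_color
```

    • Note that hashing (col, row) alone repeats with every texture tile – exactly the repetition issue called out above – so a per-instance or world-space offset would be needed to keep brick [0,0] from matching across buildings.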
  30. Animation & Rigging These questions are intended to assess the candidate’s familiarity with animation overall, particularly the problems associated with maintaining an efficient and orderly animation production. In addition to familiarity with common techniques, it’s important to know if the candidate understands the production implications of their choices. Understanding which kinds of changes have large and potentially costly ripple effects is a key skill in this area. For more technically oriented roles, questions from the 3D math section will also be useful.

    1. What happens when you have to change the skeletal proportions of a character that has already been rigged and animated? Let’s say the art director asks for the legs to be longer or shorter.
    • One of the most obvious signs of production experience in a candidate will be realizing how much of a problem this kind of change can be! This may be a good opportunity for an anecdote from the candidate about how to prevent this from happening at all.
    • Have they had to give #FierceFeedback to a colleague who was cavalier about creating so much un-creative work?
    • Do they, for example, produce trial rigs and trial models to check proposed proportions and ground speeds in game before creating fully detailed models?
    • Junior candidates should have a general understanding of the ripple effect of changing the skeleton. See if they recognize that
    • Skin weightings may need to change
    • Variant models that rely on the main animation skeleton probably need to change.
    • The rig controls will need to accommodate the new proportions.
    • Any serious change in leg proportions will almost certainly invalidate most if not all existing animations on purely aesthetic grounds. Perceived weight and character will be affected, perhaps very badly, by changing the stride length.
    • Most likely, all existing animations will need to be re-exported and re-imported (unless the project is already using some kind of runtime retargeting) even if the rig is not broken by the proportion change.
    • More advanced candidates may have some further observations:
    • A completely automated re-export is unlikely to work perfectly
    • DCC files might not reflect the real state of the in-game assets
    • The rig might not properly accommodate the changes and require manual fixes
    • Older files might be using obsolete versions of the DCC rig, requiring upgrades
    • File names or locations may have been manually changed between the DCC and the game.
    • Changes to leg length ought to translate into changes in stance, weight distribution, and ground speed – simply IK’ing the feet in place will not be enough to preserve the motion.
    • Rigs by themselves won’t be able to handle this kind of change without a lot of bespoke tech
    • External programs like MotionBuilder can do a good job at reprocessing animations for this kind of change – do they have experience with that kind of large scale intervention?
    • Runtime retargeting may offer enough control to make this work plausibly – or it may not. Depending on the implementation of the runtime system there may be permanent runtime costs that will have to be budgeted for if every frame of this character’s movement set has to be reprocessed on the fly
    • Intermediate or advanced candidates may have built or contributed to a home-grown retargeting system. If so:
    • Can they describe the mechanics effectively so you understand the system?
    • Is it more sophisticated than simply baking old control positions to world space and then constraining the updated rig to the old control positions? (A minimal sketch of that naive approach appears below.)
    • Did they create a fault-tolerant batching system to handle the inevitable broken files, or did they create a system that required lots of manual intervention?
    • If the solution was very manual, how much of the work fell onto animators vs. TAs? Was the outcome driven by a desire to be helpful, or a desire to avoid doing grunt work, or by existing power dynamics within the project? Was it handled in an #InItTogether spirit?
    • If the system relied on third party tools (ex, MotionBuilder), how sophisticated was their method of handling data transfer between DCCs?
    • This question can throw a lot of light on the way the candidate thinks about a pipeline in general.
    • Where, for them, is the real source of truth about the animation set – is it what’s recorded in the DCC files, or what’s actually in the game?
    • Do they view the problem through the lens of data transformation? Or of maintaining production assets? There’s not a “right” answer but different answers might indicate somebody who is primarily interested in tech solutions, somebody who is interested in keeping animators happy, or someone who wants to save time and money on the production floor. This can be a good entry point for followup on their ideas about who the real “internal customer” is for TA work.
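    • For reference, here is a minimal sketch of the naive bake-and-constrain approach mentioned above, written against Maya’s Python commands. Control names and frame ranges are hypothetical, and a production version would need the fault tolerance and batching discussed above.

```python
# Naive retargeting sketch for Maya: bake each old control's world-space
# motion onto a locator, then drive the matching control on the updated rig.
import maya.cmds as cmds

def bake_controls_to_locators(controls, start, end):
    """Bake each old control's world-space motion onto a fresh locator."""
    locators, constraints = [], []
    for ctrl in controls:
        loc = cmds.spaceLocator(name=ctrl + "_bake_loc")[0]
        constraints.append(cmds.parentConstraint(ctrl, loc)[0])
        locators.append(loc)
    # Bake the constrained motion down to keyframes on the locators...
    cmds.bakeResults(locators, time=(start, end), simulation=True)
    # ...then remove the constraints so the keys stand alone.
    if constraints:
        cmds.delete(constraints)
    return locators

def drive_new_rig(locators, new_controls):
    """Constrain the updated rig's controls to the baked world positions.
    This is exactly the 'not very sophisticated' version: it preserves
    world positions but nothing about stance, weight, or stride."""
    for loc, ctrl in zip(locators, new_controls):
        cmds.parentConstraint(loc, ctrl, maintainOffset=False)
```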
    1. Tell me about your process for designing, building, and supporting an animation rig. What steps do you take along the way from the first request to the final production? What is the biggest challenge to delivering a good rig for a production?
    • This question touches both on the technical side – how to build and properly release a rig – and on how the candidate fits into the production process.
    • Do they actively try to solicit input, or do they make rigs based on their own perceptions of need? Does their process include #DrivingForAlignment about requirements and project goals?
    • What attitudes do they show towards user requests? Do they have a meaningful engagement with their users? Do they just do everything they are asked to do? Do they actively try to educate and train users on new techniques? Does their approach to user needs reflect an #UsersFirst point of view?
    • When assessing their release process
    • Do they know how to deliver on time?
    • What steps do they take to ensure the quality of their deliverables?
    • Do they show a good understanding of responsible version management?
    • Do they plan releases in order to minimize disruptions?
    • Does their choice of “biggest challenge” reflect a technologist, a service-oriented teammate, or somebody who is good (or bad) at managing complex processes?
    1. What was your worst experience of building and releasing a rig or character? What did you learn from that experience?
    • This question is a good followup to 6.3.2 above.
    • Check question 3.2 for examples of things to look for in this example regarding social interaction and Unity values.
    1. What is the best experience you have had integrating mocap into a production? When did it feel like an artistic tool rather than simply a mechanical method for collecting data? Alternately, when have you seen mocap go really badly, with bad outcomes for the team or the project?
    • You’ll want to save this question for people who have had more than one experience with mocap.
    • From a technical point of view, this is an opportunity for deep dives on cleanup and data management techniques
    • Did they have to manage lots of takes? Did that require them to, say, develop a database or a quick preview tool – or did they just brute force their way through thousands of files?
    • Did they have to deal with pipeline issues in terms of calibrations, matching skeletal proportions, and setups? If so, did they solve those problems well?
    • From an artistic perspective, did they find or help to find a way to combine the technical quality and turnaround time of mocap with the precision of tightly tuned hand animation?
    • Was there some social engineering involved in getting animators to feel comfortable with mocap? Did the process show some #DrivingForAlignment?
    1. What do you think is the ideal way to handle attaching props to characters? How do you divide the work between animators, modelers, riggers, and coders?
    • In addition to design insights, this question can be a window into production experience. There are many potential complications here which might come up and provide insights into the candidate’s exposure to complex production scenarios. It’s not always possible to judge the effectiveness of a solution from a verbal description – but this can be an excellent way to find out how deeply the candidate understands this problem space.

    • Here are some examples of possible complexities in an attachment system you can ask about to test the candidate’s ability to solve a complex problem.

      • Items with multiple grab points
      • Items which can be wielded in one or two hands (eg, a baseball bat)
      • Items which can be passed from hand to hand
      • Items which can be attached to other items as well as used by a character (eg, a bayonet)
      • Items with animatable grab points (eg, a shotgun with a pump)
      • Animatable items
      • Items which can break while being used
      • How well does the candidate envision the entire process from content creation to runtime? Are they able to foresee potential problems with data that has gotten out of sync, human error, or badly authored content? Do they have a strong bias toward solving all problems in the DCC, in engine, or by working conventions?
      • Attachments like this typically involve cooperation across multiple disciplines – does the candidate’s system give due respect to all of the contributors to the pipeline in a proper #InItTogether spirit?
      1. What’s your ideal method of setting up a character modeling and skinning process to feed into an animation project? How do you balance the workflows of the modelers, the animators, the game designers and the engineers to maximize efficiency and artistic results?
      • This is deliberately open-ended because there are many possible solutions. There’s not one proper answer. Here are some things to look for in the response:
      • Do they consider the needs of the team as a whole, or only think about the perspective of a subset of the contributors?
      • How well does the proposed solution support ongoing iteration?
      • Does the solution allow for everyone involved to quickly see the results of their work?
      • Does the solution require a lot of precise up-front planning?
      • If the proposed solution involves keeping the critical data mostly inside the DCC files (for example, all of the attachment points are part of the animation rig file and the various item models), does the candidate have ideas about how to make sure that users are working with up-to-date data? Does the solution have the potential for bugs if a user is not careful about syncing to the latest DCC files all the time?
      • If the proposed solution keeps things like attachment points inside the game engine:
      • Does the candidate have a way for animators and modelers to preview the in-game results while working in their DCCs?
      • Is the candidate somebody who might implement the proposed in-editor tooling?
      • For either In or out of editor solutions, does the candidate place a high importance on the ability of users to preview their work?
      • Is the candidate thinking outside the contents-pipeline box? For example, does the solution include ideas for things like runtime simulation of attachments (jiggling backpacks, swords hanging from sword belts, helmets that aren’t rigidly attached to heads)? If so, are they also thinking through the issues that come with mixing canned animation, blended animations, and simulations?
  31. Optimization & Performance These questions involve a lot of different knowledge about interactive 3D performance. Some of these will tend to be heavily experience based (for example, a junior person is unlikely to have ever had to create their own GPU budget) so be careful to line them up with the expected level of the position being filled[b].

    1. What’s the most truly messed up performance problem you’ve ever had to deal with? How did you resolve it?
    • This question can be a technical deep dive which offers good insight into the candidate’s detective skills and understanding of modern performance issues. If it skews technical, look for evidence of how the candidate approaches complex problems.
    • Are they methodical or do they follow intuitions?
    • Do they rely on data or on rules of thumb?
    • Did they persist in the face of setbacks?
    • Do they see both pros and cons in their solution, or are they convinced theirs is the only way?
    • It’s possible that the problem and solution don’t overlap with your experience and ability to pass judgment. If that’s the case, how well does the candidate do in explaining the context to you, an untrained observer? Do they do a good job laying out the technical constraints clearly and laying out the solution logically?
    • “Messed up” might invite some discussions of human factors. What does the narration tell you about the candidate’s values in a tense situation?
    1. Pretend I’m an art director getting prepared for a project. I need your help setting up runtime memory and GPU time budgets. Where should we start?
    • This is specifically for things like “so much GPU memory” or “effects have to render in no more than 4ms/frame” – not monetary budgets.
    • Not everyone will have been responsible for this process, and there’s no obvious “ideal” budget. The key thing to look at is how the candidate thinks it through relative to their expected level and what information they seek out – it makes sense to use a genre and platform you have experience with so you can answer the candidate’s followup questions.
    • Here are some things that candidates will probably touch on. In general the more considerations the candidate tries to account for, the better. There are other valid ones as well – if you encounter something surprising, try to get a backstory on why the candidate thinks it's important.
    • 4k and its impact on textures and models
    • Art style
    • Assets which do or don’t consume more memory on different platforms (eg: textures with an extra mip on 4k vs audio files which are the same on every platform; see the worked example after this list)
    • Density of physics interaction
    • Depth vs breadth of content
    • Fixed costs of different rendering pipelines (ie, back buffer target for a deferred renderer)
    • Game genre: 60 fps shooter? 30 fps slow narrative game?
    • Impact of depth-map shadows on both render times and memory budgets
    • Is the game animated?
    • Likelihood of dense particle effects
    • Number of skinned characters on screen
    • Optimal texel density on different resolutions
    • Streaming vs fixed load
    • Texture compression options on different platforms
    • The impact of spec choices on the process - both on target budget(s) and on how much can or should be shared between high and low spec builds.
    • Virtual texturing
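    • A worked example of the kind of arithmetic behind these conversations – a rough sketch, with an illustrative format choice (BC7 at 1 byte per texel) rather than a universal rule:

```python
def texture_memory_mb(width, height, bytes_per_texel=1.0, mips=True):
    """Approximate GPU memory for one texture; a full mip chain adds ~1/3."""
    base = width * height * bytes_per_texel
    total = base * (4.0 / 3.0) if mips else base
    return total / (1024 * 1024)

# A 4096x4096 BC7 albedo (1 byte/texel) with mips: ~21.3 MB
print(texture_memory_mb(4096, 4096))
# The same content authored at 2048x2048: ~5.3 MB. A 4x saving per texture
# is why "4k-ready" asset specs ripple through the whole memory budget.
print(texture_memory_mb(2048, 2048))
```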
    1. What do you think is the biggest mistake people make when trying to understand the runtime performance of an asset or a scene?
    • This is deliberately open ended to avoid a “gotcha” question. There are so many scenarios that it’s hard to generalize.
    • One important clue to professionalism here is context: are they clear on how their answer relates to particular genres or platforms? Or do they think there’s one right answer for all scenarios?
    • Since the question is about other people’s mistakes, it can be evidence for a #BestIdeasWin mindset: advocating for objective profiling data, clear presentation of that data, and using the data to change behaviors is a good trait here.
    • Conversely this can also be evidence for an #InItTogether spirit: do they believe in collective problem solving? Or in a highly centralized, command-and-control approach to optimization based on standards and processes?
    • Here are some areas which will probably come up. As they offer a suggestion, get details as to why they think a given performance factor is “the most important” and then be sure that they can back up their opinions.
    • Vertex count.
    • Do they know the difference between the number of vertices visible in the DCC and the number that matters at render times?
    • Do they talk about the cost of additional UV channels or vertex color channels?
    • Do they know how very small triangles can end up performing like overdraw?
    • Shaders
    • Do they seem to have good intuition about the relative role of ALU and texture access?
    • Do they factor in use cases, such as a pixel shader that’s affordable on a small rendered object but sluggish when it becomes full screen?
    • Overdraw
    • Do they recognize the importance of overdraw on mobile and low-end hardware?
    • Do they have a good understanding of the difference between alpha blend, alpha test, and opaque geometry?
    • Depth
    • Do they understand the implications of using early depth draw or Hi-Z for opaque performance?
    • What kind of content works well with an early-depth rendering strategy? What kinds don’t?
    • Shadows
    • How do they account for the impact of shadows?
    • Dynamic vs baked?
    • Cascades and draw distances
    • Clipping and PVS
    • Do they propose things like shadow-only geometry, low-lod shadows, or blobs?
    • Scene composition
    • Do they talk about the impact of camera angles on perf problems?
    • Do they talk about impostors, billboards, and HLODS in addition to conventional LOD
    • Do they talk about the use of GPU instancing?
    • CPU vs GPU
    • When asked about “optimization”, do they default to assuming only render optimization or do they include overall scene throughput?
    • What are some ways they might try to change the balance between CPU and GPU in a scene that’s out of balance?
    • Memory
    • Making memory the key indicator could indicate a concern with stability (OOM crashes), with performance (affects both cpu and gpu) or with platform limits.
    • Since the question focuses on “mistakes”, how do they tie that back to memory – is it about process (budgeting / standards) or about analytical tools (memory profiling, reports, visualizations) or something else?
  32. Modeling & Texturing These questions concern the candidate’s familiarity with contemporary art workflows, particularly the pipeline from DCC tools like Maya or Blender to realtime engines. These are mostly questions about artist workflows rather than about tools dev problems.

    1. What’s a workflow you wish were part of your favorite DCC tool – something from a different program that you are envious of?
    • This is intended to see how broadly the candidate is aware of alternate ways of working – do they have a good grasp of popular workflows and the value propositions of different DCCs? For a very simple example, imagine a Maya user being jealous of Blender geometry nodes, or a Blender user who likes Max’s poly modeling tools.
    • There’s no obvious “right” answer, but good answers will touch on what makes a good workflow in the candidate’s mind: is it simplicity, power, extensibility, or something else?
    • Do they appreciate the distinction between a “workflow” and a “feature”? It’s OK to nudge them in this direction if they don’t get it right away. The key value here is to have an eye for what enables users to get an end result with minimal distractions.
    • What does their answer tell you about their #UsersFirst sensibilities? Is this a technical feature or an artist-enabling feature?
    1. If you had to help an art team quickly and efficiently texture polygon modeled buildings, what tools would you find or create to help them out?
    • This question is really best for people who have supported environment artists in the past. It can be a proxy for experience in architectural / hard surface workflows.
    • There are many good ideas to explore here; many candidates will have seen at least one or two, experienced candidates may have seen or implemented several.
    • Consistent planar projection tools - tools that apply a UV mapping at a consistent real-world size, normal to the geometry, and with consistent orientation in world space (see the sketch after this list)
    • Purely triplanar tools aren’t sufficient for this, since they don’t work with buildings that might not be aligned with the world grid or with things like beveled or rounded corners.
    • It’s interesting to see how the candidate thinks about scale: consistency matters a lot, and the world-space-to-UV-space ratio is something that could be addressed in shaders instead of tools.
    • Horizontal Alignment tools - tools to align the left and right edges of UV shells – so, for example, a brick pattern properly wraps around a corner
    • A fancy version of this might be able to automatically apply consistent planar projections and then align and unwrap them in series
    • All solutions have the possibility of ending with one visible seam, since you can’t guarantee that all the wraps will end at an even multiple of U
    • Vertical Alignment tools - independent of scale, it may be important for texturing to have UVs that align with the floor or ceiling of an interior
    • What about floorboards and ceiling mouldings?
    • UV patching tools - tools to help fix up a projection after a polygon edit – for example, after adding some irregularity to an already mapped wall user can make sure that the irregular vertices respect the overall planar projection of the wall.
    • Camera based UVs - for situations where the camera is heavily constrained, camera based uvs can be an excellent way to optimize texture usage and drawcalls for skybox-like geometry. This probably implies integration with a 3d paint program but could also be used with baked projections.
    • Roof projections - does the answer handle inclined slopes? How are roof UVs to be aligned?
    • Swappable parts - does the answer envision mix-and-match texturing that can accommodate the presence or absence of mouldings, floorboards, and lintels?
    • Trim sheets - For very constrained applications, what about support for special UVs to accommodate trim sheet atlases?
    • This question can go deep into 3d math – an advanced user might know how to construct a UV projection matrix.
    • What do the answers tell you about the candidate’s bias toward or against large-scale automation? Do they want a pipeline that works “automagically,” or do they just want to accelerate common manual workflows? If the former, do they appreciate the complexity of the tooling required? If the latter, do they think the potential gains are a game-changer or merely incremental?
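    • As a reference point for the “consistent planar projection” idea above, here is a minimal Python sketch of the underlying math. In practice this would live inside a DCC tool pulling positions from the scene; the function names and scale constant are illustrative.

```python
def planar_uvs(world_positions, u_axis, v_axis, world_units_per_tile=2.0):
    """Project points onto the plane spanned by u_axis/v_axis (assumed
    orthonormal) at a fixed real-world scale, so a 2m brick texture always
    covers 2m of wall regardless of the object's local transform."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    return [(dot(p, u_axis) / world_units_per_tile,
             dot(p, v_axis) / world_units_per_tile)
            for p in world_positions]

# For a wall facing +Z, u runs along world X and v along world Y.
# A 4m x 3m wall gets 2 x 1.5 texture repeats: [(0,0), (2,0), (2,1.5), (0,1.5)]
quad = [(0, 0, 0), (4, 0, 0), (4, 3, 0), (0, 3, 0)]
print(planar_uvs(quad, u_axis=(1, 0, 0), v_axis=(0, 1, 0)))
```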
    1. How would you go about designing a modular kit-bashing system that would make it easy to cheaply assemble a wide variety of urban environments? What would you worry about while creating a workflow for the artists and designers using the system?
    • Again, this is open ended on purpose. Use follow-ups to see how thoroughly the candidate has thought through the issues, not to probe for a single “right” answer.
    • This question has two dimensions – technical and interpersonal. It can be a useful lens for a #GoBold technical vision and an #InItTogether spirit of teambuilding and collaboration.
    • On the technical side:
    • What kinds of automatic validation does the proposed system include?
    • How would the candidate propose to tell the system about the pieces? Metadata, naming conventions, an asset database?
    • How does the candidate account for material variations?
    • How does the candidate account for trim sheets or other methods of varying visible decor?
    • Does the system assume a human operator, or is it designed to be driven algorithmically? If so, is it more than just dice rolls? What underlying constraints might it include?
    • On the social side
    • What does the candidate’s proposal tell you about how they approach collaboration? Is this more of an educational project or a “my-way-or-the-highway” set of rules?
    • Are the candidate’s expectations for user behavior reasonable? Or are they too optimistic or pessimistic?
    • What concrete suggestions do they have about how to create a positive, self-sustaining user community for the system?
    • Finally, look at the quality and depth of the followup questions the candidate asks. You can customize the scenario as needed to match the needs of the role you’re hiring for, though you should use very similar specifications for all the candidates in a given hiring round.
    1. What do you look for in a model when you’re evaluating the UV and material assignment? What can you do to improve a model with poor UV or texturing choices?
    • There are several things to balance in texturing and UV layout – does the candidate recognize all of them? This is a good way to gauge experience in technical modeling.
    • Efficiency – maximizing the number of texels that will be actually used
    • Authalism – consistent, stretch free coverage with square pixels
    • Seams – hiding seams in unobtrusive places is important visually. Minimizing the number of seams overall is important to keep vertex costs down. But too few seams leads to stretching
    • Structure – UVs that have a natural flow (for example, along the length of an extruded shape). When is it better to have dedicated UV channels for areas which have inherent directionality?
    • PTex or other schemes for UVs outside of 0-1
  33. 3d Math Questions about linear algebra, trigonometry, and higher-order math. These will not be appropriate to all roles but are useful for gauging a candidate’s technical depth. Many of these will be appropriate for shader or animation/rigging candidates.

In addition to checking basic knowledge, these questions are a good index of how well the candidate is able to explain fairly abstract technical concepts.

    1. What do you know about a dot product?
    • Do they know the geometric meaning (the cosine of the angle between two normalized vectors)?
    • Do they know the algebraic meaning? For any n-dimensional vectors A and B, A·B = A₁B₁ + A₂B₂ + … + AₙBₙ
    • Do they know how to use it for, eg, a simple look-at rotation? (A minimal sketch appears at the end of this section.)
    1. What do you know about a cross product?
    • Do they know it returns a vector normal to the plane of two other vectors?
    • Do they know it’s only valid in exactly three dimensions?
    • Do they know that the length of the cross vector is the product of the lengths of the two inputs times the sine of the angle between them (so it’s 1 only for perpendicular normalized vectors)?
    1. What do you know about the numbers in a typical matrix?
    • Almost every candidate will know that a matrix represents a position, rotation, and scale transform all at once (only very junior candidates should not know this). This question is a good proxy both for linear algebra knowledge and for the ability to explain complex concepts in plain speech.
    • Depending on the candidate’s background they may be thinking of “rows” and “columns” differently than you do – just make sure you and the candidate are referring to the same things. In this example a “row” is a11 through a14, and a column is a11 through a41:

        a11 a12 a13 a14
        a21 a22 a23 a24
        a31 a32 a33 a34
        a41 a42 a43 a44

    • Are they aware of matrices other than a 4x4 scale-translate-rotate matrix, like 2D or 3D rotation matrices (2x2 and 3x3, respectively)?
    • Do they know that for rotation matrices the “rows” are the basis vectors (so in the above, [a11, a12, a13] is the local X axis, [a21, a22, a23] is local Y, and so on)?
    • Do they know that the length of the vector in each row is the local scale along that axis?
    • Do they know what a homogeneous coordinate (ex: a44) is and how it works?
    • Do they know what the determinant of a matrix means in general terms?
    1. Assume I don’t know much about how graphics are rendered. Can you help me understand how the computer turns triangles, lights and so on into something that looks like a lit surface?
    • This is deliberately vague, to avoid setting a candidate up for a pass-fail situation. Unless the position being hired really needs the ability to write a renderer, this is intended to gauge the candidate’s general familiarity with modern graphics, not to make them prove their ability to recite formulas. Clear plain-language explanations are very useful here.
    • A junior candidate will generally have only a hazy idea of the actual math.
    • A more knowledgeable candidate will probably know the N-dot-L “Lambertian” equation – the brightness of a point on a rendered surface can be represented as the cosine of the angle between the surface normal and the light direction.
    • The next level of sophistication is specular reflections, traditionally represented by the Phong or Blinn-Phong equations, which take into account the camera’s ability to see the vector reflected from a light source and then “tighten” the resulting highlight by raising the resulting coefficient to a higher power. Only programmers are likely to know the whole equation; the key difference is that Lambert is view independent and Blinn-Phong is view dependent. (A minimal sketch of both appears below.)
    • Finally there are PBR models, which attempt to make sure that energy is preserved by ensuring that the total reflectance of a given point is no more than the incoming light – if there is more diffuse there must be less specular and vice versa. PBR models also typically account for glancing specular reflections.
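    • For the look-at question above, here is a minimal Python sketch of one common construction: using the cross product to build an orthonormal basis whose rows are exactly the local axes discussed in the matrix question. The handedness convention is arbitrary here; engines differ.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def look_at_basis(eye, target, up=(0.0, 1.0, 0.0)):
    """Rows of the result are the local X/Y/Z axes of an object at `eye`
    aimed at `target` -- i.e. the 3x3 rotation part of a transform matrix.
    Degenerates if the view direction is parallel to `up`."""
    forward = normalize(tuple(t - e for t, e in zip(target, eye)))  # local Z
    right = normalize(cross(up, forward))                           # local X
    true_up = cross(forward, right)                                 # local Y
    return (right, true_up, forward)
```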
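    • And a minimal sketch of the two classic shading models named above, in Python for readability rather than shader code; the vector helper and shininess constant are illustrative.

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def lambert(normal, to_light):
    """View-independent: brightness is just cos(angle between N and L)."""
    return max(0.0, dot(normal, to_light))

def blinn_phong(normal, to_light, to_eye, shininess=32.0):
    """View-dependent: the 'half vector' between the light and eye
    directions stands in for the reflected ray; the exponent tightens
    the highlight. All input vectors are assumed normalized."""
    h = tuple(l + e for l, e in zip(to_light, to_eye))
    length = sum(c * c for c in h) ** 0.5
    h = tuple(c / length for c in h)
    return max(0.0, dot(normal, h)) ** shininess
```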
  34. Lighting Questions about both the theory and practice of lighting. A few of these may be useful for generalist candidates; others are specific to lighting specialists.

    1. When would you use a lightmap and when wouldn’t you?
    • For juniors, this is a softball warmup question – you’ll use lightmaps for static objects and other techniques for moving objects. There are clues in the answer that can help pinpoint experience levels.
    • Some of the extra dimensions:
    • What about dynamic time of day? Is it impossible, or could you imagine swapping lightmaps?
    • What about memory concerns – is a lightmap ultimately a memory-for-runtime-performance swap?
    • Aesthetics - what do you get from lightmaps that’s more visually interesting than just shadows?
    • Bugs and limitations - what about…
    • maintaining lightmap UVs on assets
    • light leaks based on topology
    • sudden jumps in memory as atlases expand
    • This might also reveal some attitudes about production processes:
    • What does adding lightmaps do to your game creation workflow?
    • Does a lightmapping workflow create technical problems to solve? Or does it create grunt work?
    1. Explain the different steps a lighting artist takes when they need to create the lighting for an interior that is affected by both natural and artificial light.
    • How many of these come up?
    • Art process: image references, composition, “painting with light”, cinematography, and exposure
    • Lighting: direct vs indirect lighting, natural vs artificial lighting, role of reflections
    • Tech: light values and ranges (real vs abstract)
    • Practicalities: leaks, dynamic lighting, baking management, lighting-department-to-other-department relationships
    • Overall, does the lighter have a reasoned overview of how lighting fits into a production context? Are they used to working in isolation or cross-collaborating?
    • Are they self-aware about the tradeoffs between quality and production costs?
    1. What are the most time-consuming stages for an artist producing the lighting in a real-time application? What could be done to reduce friction?
    • Does the lighter take responsibility for fitting lighting into production? Or do they see it in isolation?
    • Do they have real insights into how the process works, or do they expect all improvements to come from outside?
    1. What are some of your favorite movies and games in terms of cinematography and lighting?
    • How do they talk about the images they like? Do they have good communication skills? Are they good at helping you understand their perspective?
    • Does their view extend past personal taste into an appreciation of how lighting contributes to entire productions?
    • Do they call out specific examples of good tricks and economy of effort?
    1. Tell me about the relationship between exposure and tone mapping – how they work together and what jobs each of them does in the final image.
    • Do they get the basic difference?
    • Does their approach to the relationship reflect a clear philosophy, or is it ad hoc?
    1. If you could help design a brand-new GI system, how should that system work from an artist's point of view?
    • Good things to hear about: ease of use, minimal user intervention, scalability, performance, multi-platform compatibility, future-proofing, etc.
    • How much does their discussion reveal their grasp of lighting concepts and technology in general?
  35. Content Pipelines Questions relating to data flow, maintainability, and productivity enhancements in production. While many Unity roles don’t require the kind of glue-code and integration work that dominates TA work in the field, these questions often highlight the candidate’s ability to reason about workflows and to empathize with end users.

    1. How would you debug visual discrepancies between engine and DCC for an asset you made? What’s the funkiest visual problem you’ve had to debug, and how did you catch it?
    • The goal here is to identify their ability to see how data flows through the pipeline, which speaks to how well they know the types of things which usually flow through:
    • Normals
    • Tangents
    • Tex coords
    • Vtx colors
    • Transforms
    • Generally they will have to ask what kinds of discrepancies; you can tailor the specifics to their experience.
    • The actual debugging process is interesting because it can show general graphics knowledge and tool familiarity.
    • For example, if it looks right in Maya and wrong in Unity, do you check it in Blender? Do you open the FBX in a text editor? Do you look at it in Visual Studio?
    • No wrong answer, but in general showing familiarity with a broad range of tools and approaches will be stronger.
    1. What DCC programs do you think have the best workflows, which have the worst, and why?
    • Like most opinion questions, it’s important to try to separate out HOW the candidate reaches a judgment from whether or not you happen to agree.
    • One obvious thing to look for here is industry perspective – familiarity with multiple tools is good.
    • When they talk about workflows, do they reference multiple user types and agendas? Or are they expressing personal preferences?
    • When they say “workflow,” what granularity are they talking about? Do they mean “a suite of tools that work together” or do they mean “a button”?
    • Do they recognize different subsets of users with different needs?
    • What values separate a “good” workflow from a “bad” workflow?
    • Do they focus on flexibility, correctness, or approachability?
    • When forced to choose, which do they care about? Asking why they privilege one value over the others can be good.
    1. Walk me through the full asset creation pipeline for an asset, from concept to ship.
    • This question probably needs to be tweaked for the candidate's specific experience – somebody with experience in 2D will have different reference points than somebody who is working on big open world games. Think ahead about how to tweak for specificity.
    • Things to look for:
    • Does the “pipeline” include room for iteration and experimentation, or is it trying to solve all problems up front?
    • Does the pipeline thinking extend to things outside the asset creation stage? For example, how do concept art, graybox levels, or prototype animations fit into the overall lifecycle?
    • What is the assumed level of error tolerance? Is this a pipeline which expects users to change their minds, make mistakes, and have to re-do things late?
    • Use followup questions to see what the candidate sees as the most important benefit of their role. Some possibilities:
    • enabling production artists
    • guarding data integrity
    • saving time
    • optimizing runtime performance
    • shipping on lots of platforms
    1. Tell me about your approach to optimization… When do you start optimizing? How do you measure success? How does techart interact with other disciplines during optimization?
    • Note: for more optimization questions see section 7.4 above.
    • Some candidates will be thinking about performance from day one… these may focus on things like:
    • Well crafted budgets
    • Constant monitoring of budgets throughout production
    • Tight coordination with engineering and QA
    • Lots of builds and lots of analytics
    • Working with art direction to be performant early on
    • Others are going to assume performance is a late-production priority – they may talk about things like:
    • Having to reprocess assets in bulk
    • Having to negotiate with artists and art directors about loss of fidelity
    • Managing memory-for-performance tradeoffs
    • Diagnostic techniques (eg: PIX, RenderDoc, debug views, profilers)
    • A good thing to look for is strong commitment to data collection: using profiling and measurement rather than rules of thumb.
    • Optimization is often an emotionally charged process as well as a technical one. What does the candidate’s answer tell you about their attitudes towards collaboration, respecting colleagues, and #InItTogether?
    • There’s a wealth of possible technical knowledge here; try to get the candidate to educate you about optimization in a way that lets you judge their skills as an educator.
    1. What is the least pipeline-friendly tool you’ve had to work with? What have you done to try to make it a better citizen?
    • For juniors this is likely to be a question about the general user-friendliness and reliability of the tool.
    • For seniors, the answer is probably a tie between Photoshop and ZBrush… but poke at them to find out why.
    • More seriously: what makes something “pipeline friendly”? Is it something more than being scriptable?
    • What would make their disliked program a better fit for the pipeline?
    • Launcher applications which set up the program properly
    • Web based tools that are accessible even when working with closed programs (as opposed to cramming everything into DCC python)
    • Finding a scriptable alternative
    • Or…?
    • How much work is worth doing to integrate a tool? When is it better to leave that up to the artists and focus on compatible data formats?
    1. Out of all the pipelines you’ve worked on, what one thing saved the most time or labor? Why was that so effective?
    • This question skews a bit senior; it really won’t work until a candidate has lived through two or three product cycles.
    • This can tell a lot about how the candidate looks at production generally:
    • Is this a technology change? (like, “we switched to Simplygon and stopped making manual LODs”)
    • Is it a change in workflow? (like, “we started doing graybox levels for gameplay testing before building”)
    • Is it a change in direction? (“we stopped shipping on the Switch!”)
    • The “why” is important here – saving a lot of time on an unimportant step is not a big win, but cutting out an unnecessary manual bit or tweaking the design for overall maintainability is a big deal.
    • This choice could also be about validating data – it could be “saving time” by stopping manual mistakes before they happen.
    • Does the candidate think about the pipeline as a technical artifact only – or is it tied into a set of team relationships?
    • Did the choice they celebrated sacrifice artistic or gameplay ends for economy?
    1. We often have to improvise smaller pipelines to get some special kind of data into a production. Tell me about your favorite example of getting data into a product in an unexpected way.
    • This won’t work well for very early career candidates unless they’ve had some unusual experience.
    • This works well if you can provide an example of your own for context. It’s also a nice warmup question (perhaps the second or third in this area). The choice of “favorite” instead of “best” or “most efficient” is deliberate; this can be a chance to showcase creativity or ingenuity under pressure.
    • What does the story tell about a candidate's self-image? Do they focus on technical issues (#BestIdeasWin), on making users happy (#UsersFirst), or on getting things done under tough circumstances (#InItTogether)?


Appendix: Technical qualifications

Because TechArt at Unity covers a wide range of skill sets and needs, we don’t want to rank candidates entirely on a single technical axis: an expert lighter who knows everything about how to create beautiful, performant environments but does not write shaders is still an asset, and we don’t want to create an abstract system that penalizes that kind of candidate. However, it is useful to make sure we can gauge expectations around technical ability so that the people we hire will be effective.

Here’s a link to our technical expectations matrix, a rough guide to describing a candidate’s technical skills, which includes a few examples of specific skill sets (C#, shader programming, etc). The matrix gives some rough guidance for assessing a candidate’s technical level on a simple 4-point scale:

Beginner: The candidate is familiar with the most basic concepts but has little practical experience with them.

Early: The candidate knows enough to do routine tasks without assistance, but may often be following instructions rather than acting from a solid understanding of the technology.

Professional: The candidate can handle most production tasks without guidance, but may not always be able to find the optimal solution to complex problems. Has a solid grasp of the fundamentals.

Expert: The candidate is fluent in the technology and can handle complex problems with assurance. Able to innovate and expand the state of the art.

The skill matrix can be useful for getting a sense of where candidates fit in – hopefully, without creating an environment that’s unfriendly to candidates whose contributions are primarily artistic or organizational. If you are a domain expert and don’t see a good representation of your area in the skill matrix, please consider adding more detail to the spreadsheet.