Questionable Answers

So how did you answer the quiz yesterday? To all who commented: thanks, I enjoyed reading your replies. I see these questions as possible differentiators; you might be able to tell something about a person from their answers. Just a theory, though, and I’m sure any number of people will disagree – I’d love to hear other ideas. Anyway, I find these questions fun to test on people, and I hope more of you will post your answers, or others’. If someone does a serious test for correlations, awesome!

RGB or RYB? Well, of course it’s RGB if you’re in computer graphics at all. I find this an interesting question to ask anyone. It used to be a differentiator for computer graphics people vs. everyone else. RYB is what we learnt in school: the subtractive primaries used for paints. RGB are the additive primaries used for lights. But with general computer and screen knowledge increasing, and a growing number of artists using computer programs, maybe this one is now more technophile vs. technophobe?
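
To make the distinction concrete, here’s a minimal sketch of the two mixing models (the formulas are deliberately crude illustrations – real pigment mixing is far messier than componentwise multiplication):

```cpp
// Crude models of additive (lights) vs. subtractive (paints) mixing.
// Values are illustrative RGB triples in [0,1], not real colorimetry.
#include <cstdio>

struct Color { float r, g, b; };

// Additive mixing: light contributions sum (clamped to 1).
Color addLights(Color a, Color b) {
    auto c = [](float x) { return x > 1.0f ? 1.0f : x; };
    return { c(a.r + b.r), c(a.g + b.g), c(a.b + b.b) };
}

// Subtractive mixing: each paint layer absorbs what the other reflects,
// crudely modeled as componentwise multiplication of reflectances.
Color mixPaints(Color a, Color b) {
    return { a.r * b.r, a.g * b.g, a.b * b.b };
}

int main() {
    Color red = { 1, 0, 0 }, green = { 0, 1, 0 };
    Color light = addLights(red, green);  // red + green light -> yellow
    Color paint = mixPaints(red, green);  // red * green paint -> near black
    std::printf("lights: (%g, %g, %g)  paints: (%g, %g, %g)\n",
                light.r, light.g, light.b, paint.r, paint.g, paint.b);
}
```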

+Y or +Z? For me, this used to be computer artist vs. CAD user, and maybe still is. SGI and Wavefront were definitely +Y, using the same up axis for model and world space as for view space. For architects the plan view is an X/Y 2D sort of thing, so +Z is then extruding the plan view into 3D. This is certainly how AutoCAD’s file format evolved, having started out in 2D. Update: more about this topic here.
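
If you ever need to move a model between the two camps, the fix is a simple change of basis. A minimal sketch, assuming one common mapping (old +Z up becomes new +Y; this particular choice is a rotation, so handedness is preserved, but it’s only one of several valid conventions):

```cpp
// Convert between Z-up and Y-up coordinate conventions.
// Mapping chosen here: old +Z (up) -> new +Y, old +Y -> new -Z.
struct Vec3 { float x, y, z; };

Vec3 zUpToYUp(Vec3 p) { return { p.x, p.z, -p.y }; }

// The inverse mapping, for going back the other way.
Vec3 yUpToZUp(Vec3 p) { return { p.x, -p.z, p.y }; }
```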

Green-yellowish or red-orangish? Chartreuse is green-yellow. I think this one can differentiate men from women, excluding liqueur drinkers and people from France. All the women I asked chose the right answer, as it’s a fabric color; a majority of men did not (including me). Interestingly enough, Crayola originally got this one wrong in 1972, later renaming their color atomic tangerine. For more entertaining color/sex correlations, see this xkcd blog post. His post inspired me to write down these four differentiator questions I’ve tried over the years. True confession: I spelled “fuchsia” wrong, too. Fuch-sia? Really? So let’s make a t-shirt – now on sale, though I can’t imagine anyone buying one. Update: more on this critical question here.

Bottom row or right column? This used to be a differentiator for computer graphics vs. mathematically trained, though there are subtleties (not all fields of math use the right column). Nowadays it might be more DirectX vs. OpenGL. A key thing to realize is that, in memory, both DirectX and OpenGL store the translation vector in the 13th, 14th, and 15th of the 16 memory locations – beyond that, it’s just notation. My theory is that you imprint on whichever one you first used, sort of like vi vs. emacs back in the day. Here are responses from 1993 on Usenet to this religious question, archived at Steve Hollasch’s wonderful collection of ancient but still-valid knowledge. There are posts there from one of the authors of OpenGL and many other graphics people. My favorite passage, from Robin Forrest: “Steve Coons used row vectors for his influential early papers on transformations (University of Michigan Summer Courses in the mid 60’s). I asked him why in 1967 and he said it was because it was easier for the stenographer to type row vectors!” The oddest thing I’ve seen in this area was a book on ray tracing where the first row contained the translation vector. Self-taught is not always a plus.
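
To make that memory-layout point concrete, here’s a small sketch; the two function bodies are deliberately identical, since the conventions only disagree on notation, not on where the bytes go:

```cpp
// A 4x4 transform stored as 16 contiguous floats. Both camps put the
// translation in (0-based) slots 12, 13, and 14.

// Row-vector convention (translation in the bottom row), stored
// row-major: element (row, col) lives at m[row * 4 + col], so the
// bottom row's first three entries are m[12..14].
void setTranslationRowVector(float m[16], float tx, float ty, float tz) {
    m[12] = tx; m[13] = ty; m[14] = tz;
}

// Column-vector convention (translation in the right column), stored
// column-major: element (row, col) lives at m[col * 4 + row], so the
// right column's first three entries are again m[12..14].
void setTranslationColumnVector(float m[16], float tx, float ty, float tz) {
    m[12] = tx; m[13] = ty; m[14] = tz;
}
```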

Thanks to all who commented; I’m glad to know I’m not the last person on earth putting my translations in the bottom row, where the gods intended them to be (that said, we use column vectors in our book, following most fields of math). If you have alternate theories as to What It All Means, post them. There are other questions out there with no correct answers, like left-handed vs. right-handed, but that one is somewhat ill-defined to ask. My own answer is right-handed for world space, left-handed for view matrices, usually, but I really don’t care – and then there’s upper left vs. lower left for the screen’s origin. Any other good questions out there?

Questionable Content

Here are a few questions. There are no right answers (except for the one with a right answer); please choose the first answer that comes to you:

What are the primary colors?

A. Red, yellow, blue

B. Red, green, blue

You’re making a 3D model of some object. Which way is up in your world?

A. +Y

B. +Z

What color is chartreuse?

A. Green-yellowish

B. Red-orangish

You write out a 4×4 transform matrix to translate an object. You put the translation values:

A. in the bottom row

B. in the right column

All for now, my opinions and theories tomorrow (though feel free to comment before then).

Update: “answers” here.

SIGGRAPH 2010 Courses

This year, SIGGRAPH is making a very strong push to include more game and real-time content. A lot of the programs are yet to be published, but the full list of courses is now up on the conference website, and many of them are of interest. The courses have always been the SIGGRAPH program with the most relevant material for film and game production; this year the game side is particularly strong. If you are doing game graphics, the courses by themselves are reason enough to attend the conference.

Full disclosure – I am organizing two of these courses, so my description of them may not be fully objective 🙂

The courses most directly relevant to game developers:

  1. Advances in Real-Time Rendering in 3D Graphics and Games – this full-day course, organized by Natasha Tatarchuk, has been a highlight of SIGGRAPH since it was first presented in 2006 (the name’s a bit clunky, though). Each year Natasha solicits top-notch game and real-time rendering content for her course. SSAO was first presented at this course, as were cascaded light propagation volumes and many other important techniques. This year includes presentations from game powerhouses Bungie, Naughty Dog, Crytek, DICE, and Rockstar, among others.
  2. Beyond Programmable Shading – another very strong full-day course, now in its third year. Like Natasha’s course, this one includes brand-new material every year. Focusing on GPU compute APIs such as CUDA, DirectCompute, and OpenCL, the presentations tend to skew towards GPU vendors, but have also included some groundbreaking game developer talks on topics like sparse voxel octrees (by id Software) and parallelism in graphics engines (DICE). This year, besides the usual suspects (NVIDIA, AMD, Intel, Microsoft), there will be a talk by Johan Andersson from DICE (he gave the parallelism talk last year, and I can’t wait to hear what he’s been up to since), one from Kayvon Fatahalian of Stanford (who has been doing some fascinating research on GPU-accelerated micropolygon rendering), and finally one from Luca Fascione of Weta. Hopefully Luca will be talking about the GPU-accelerated PantaRay system he helped design to render the jungles in Avatar. PantaRay is used to precompute occlusion – a very game-like thing to do.
  3. Stylized Rendering in Games – in recent years, games have started to explore the universe of possible styles beyond photorealism. The course is organized by Morgan McGuire, who is also chairing this year’s NPAR conference, and includes presentations by the developers of some of the most prominent stylized games.
  4. Physically Based Shading Models in Film and Game Production – this is one of two courses I am organizing. This topic has fascinated me for years and was a major focus of my work on RTR3. Physically based shading is currently a hot topic in film production, making this a natural film-games crossover topic (my primary focus on the conference committee). I’ve been able to get speakers with really strong film production backgrounds, so I’m optimistic that this course will turn out well.
  5. Color Enhancement and Rendering in Film and Game Production – this is my other course. Most of my work in this area is more recent than the physical shader stuff so RTR3 doesn’t have as much material on it; perhaps I can remedy this in RTR4. Although this topic is well-established in film production (a field from which I’ve been able to get good speakers for this course as well), it is still an area of active development in games, as attested by the excellent GDC 2010 talk by John Hable.
  6. Global Illumination Across Industries – this is another film-games crossover course, with presentations by top people working on global illumination in both industries (the games side is represented by Illuminate Labs for precomputed GI and Crytek for dynamic GI).
  7. An Introduction to 3D Spatial Interaction With Videogame Motion Controllers – between Microsoft’s Project Natal, Sony’s PlayStation Move, and the Wii MotionPlus, motion controllers are an extremely timely topic. The speakers include Richard Marks, the brains behind the EyeToy, PlayStation Eye, and PlayStation Move.
  8. Recent Advances in Real-Time Collision and Proximity Computations for Games and Simulations – this is an important area, and the speakers are leading researchers in the field. Among other topics, the course will cover the collision detection systems in the PhysX and Bullet libraries.
  9. Advanced Techniques in Real-Time Hair Rendering and Simulation – while this topic is a bit more of a niche, it is of interest for many games and the speakers have done some of the leading work in this area.
  10. Volumetric Methods in Visual Effects – one of the main differences between game and film graphics is the amount and quality of atmospheric effects. Film VFX houses have been actively developing their own systems for modeling and rendering clouds, fog, fire, ocean spray, etc. This course includes a stellar cast of speakers from Digital Domain, Sony Pictures Imageworks, Rhythm & Hues, Side Effects (developers of Houdini), PDI/DreamWorks and Double Negative; anything these people don’t know about volumetric effects isn’t worth knowing. This course is likely to have lots of good ideas for stuff that isn’t possible in real-time yet, but will be in the near future.
  11. Filtered Importance Sampling for Production Rendering – another film rendering course which is likely to yield good medium- and long-term real-time ideas. Importance sampling is crucial for efficient, high-quality reflections from arbitrary BRDFs and lighting; it can be used with environment maps as well as ray tracing. Filtered importance sampling is a more general, correct, and expensive version of the common game trick of prefiltering cubemaps for glossy reflections. It has recently found wide use in film production, a topic about which the speakers (from major visual effects houses such as ILM, Image Movers Digital and MPC) are well-qualified to speak.
  12. Perceptually Motivated Graphics, Visualization, and 3D Displays – understanding human visual perception and how it relates to graphics is important for knowing which corners can be safely cut and which ones will yield distracting artifacts; 3D displays are a timely topic for game developers as well, now that TV and console manufacturers are getting into the act.
  13. Gazing at Games: Using Eye Tracking to Control Virtual Characters – I’m not aware of any commercial games that use gaze tracking as an input method (the course is presented by academic researchers). If existing cameras such as Playstation Eye and Project Natal can track eyes with sufficient precision, this may be an important trend going forward, but if new equipment is needed this might not be relevant for a long time (if ever).

Although not as directly relevant, some of the other courses appear to be informative and fun, such as Andrew Glassner’s course about the Processing graphics programming language, and the course on how to Build Your Own 3D Display.

7 Things for May 2

7 things, with images for each as some quick eye candy – is it worth my adding these images?

  • Here’s a nice rundown of much of the graphical goodness (and badness, e.g. temporal antialiasing) of the Halo: Reach beta. It’s worth a skim just to get a sense of the state of the art in a wide range of areas. The motion blur video appears not to be available currently. (thanks, Mauricio)
  • Unlimited Detail Technology is a voxel-based renderer with an interesting history: it was developed by a self-taught hobbyist who once ran a supermarket chain. There’s been interest in voxels for a while, e.g. Jon Olick’s SIGGRAPH presentation in 2008 (slides here). Voxel rendering reminds me of the CPU-side heightfield renderer used in Novalogic’s Comanche and Delta Force game series from 1992 on. Novalogic’s was a 2.5D system using contour following, while the Unlimited Detail system is full 3D voxels. Looking at UD’s presentations, it seems like a form of 3D clipmapping, where the level of detail of the voxels is determined by distance. The look reminds me of dribble sand castles. The coolest part: no GPU needed, it’s all CPU. I can imagine a number of limitations to this system: animation and deformation are hard, sharp edges aren’t possible, shading models have limits, transparency doesn’t work, textures are difficult to apply, fuzzy objects can’t be rendered, etc. Still, fun to see and a fascinating option. (another thanks, Mauricio)
  • The Ruin Island demo was created by some students in France. Parallax occlusion mapping, depth of field, NPR toon rendering, motion blur, glow and bloom, and more – it’s a grab-bag of effects in OpenGL. What’s nice is that the source code is provided. (Geeks3D)
  • Norbert Nopper has a small set of standalone OpenGL 3.2 and GLSL 1.5 tutorial programs with code for various effects. (Morgan McGuire)
  • The demoscene demo agenda circling forth uses particle clouds for a beautiful look. Note that the links for the video and demo are just under the image at the top of the page.
  • The photorealistic Octane Renderer uses CUDA for acceleration. To try it out you’ll need a fairly up-to-date NVIDIA driver, the demo suite, and the executable. It’s actually pretty cool to see the frameless rendering in action; it’s quite interactive for their simple scenes. There’s golden thread rendering: the longer you sit, the better the image gets (see the sketch after this list). (Geeks3D)
  • 3D printing with ice. (BoingBoing)
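
Since “golden thread” rendering comes up above: the core idea is just a running average of independent noisy frames, so the displayed image converges the longer you sit. A minimal sketch (the data layout is my own illustrative choice, not Octane’s actual code):

```cpp
// Progressive refinement: average successive noisy renders of the same
// scene. Each frame is an independent Monte Carlo estimate per pixel,
// so the average's noise shrinks roughly as 1/sqrt(frameCount).
#include <cstddef>
#include <vector>

struct Accumulator {
    std::vector<float> sum;  // running per-channel sums
    int frameCount = 0;

    explicit Accumulator(std::size_t numChannels) : sum(numChannels, 0.0f) {}

    // Merge one new noisy frame into the running total.
    void accumulate(const std::vector<float>& frame) {
        for (std::size_t i = 0; i < sum.size(); ++i)
            sum[i] += frame[i];
        ++frameCount;
    }

    // The displayable value: the mean of all frames so far.
    float channel(std::size_t i) const { return sum[i] / frameCount; }
};
```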

Halo: Reach motion blur:

Unlimited Detail voxel image:

Ruin Island demo:

OpenGL 3.2 Nopper demo image:

agenda circling forth:

Octane Rendering, after 2 merged frames (interactive update) and after 5685 frames (a few minutes):

3D ice printing:

Another Introduction to Ray Tracing

I was waiting around a bit for my younger son’s doctor’s appointment this morning, so I decided to edit a book. I finished it just now; it’s called Another Introduction to Ray Tracing. It’s 471 pages in book form. You can download it for free, or order a paperback copy from PediaPress for $22.84 plus shipping. I won’t earn a dime from it, but since it took me less than two hours to make, no problem.

So what’s happening here? After investigating Alphascript and Betascript Publishing a month ago, reporting on them on Slashdot, and following up on a lot of great comments, I learnt a number of interesting tidbits. Here’s a rundown.

First, VDM Publishing itself is sort of a vanity press, but with no cost to the author. It seeks out authors of PhD theses and similar works, asking for permission to publish. This is not all that unreasonable: because the works are only published on demand, the authors do not have to pay anything; they even get a few hardcopies for free. Here’s an example from our field that I reported on in February. That said, it’s mostly a win for VDM Publishing, which charges steep prices for the resulting works. Such not-quite-books mix in with other books on Amazon. It takes a bit of searching to realize that a given work is a thesis and likely could be downloaded for free. A bit misleading, perhaps, but not all that horrifying. Caveat emptor.

VDM Publishing also has an imprint called LAP, Lambert Academic Publishing, which does the same thing, publishing theses such as this one by Nasim Sedaghat. With a little Googling you can find Nasim, and then find the related paper for free.

VDM’s imprints Alphascript and Betascript Publishing I’ve already described: they’re little more than random repackagers of Wikipedia articles. Here’s an example book. I posted one-star reviews for a few of these books on Amazon; what’s funny is that the owner of the firm actually responded to my criticism (with a one-size-fits-all response in slightly broken English).

Four weeks ago Alphascript had 38,909 and Betascript 18,289 books listed on Amazon. To my surprise they now have 39,817 and 18,295, an increase of (only!) 914 new books – looks like they’re slowing down. They’ll have to work hard to catch up with Philip M. Parker’s 107,182 books, or his publishing firm ICON Group International’s 473,668. The New York Times has an interesting article about this guy.

Betascript Publishing has two books on Amazon related to ray tracing: Ray Tracing (Graphics) and Rasterization (which includes a section on ray tracing). The ray tracing book is 88 pages long and $46, more than 50 cents a page. My book, at $22.84 for 471 pages, is less than a nickel a page. So my new book’s better per pound. I actually worked a little at compiling my book: making logical groupings, picking relevant articles, creating chapter headings, the whole nine yards (never did figure out how to make a cover from an existing Wikipedia image, though). The exercise showed me the limits of Wikipedia as a book-making resource: the individual articles are fine for what they are, some are wonderful, and editing them into a somewhat logical flow has some merit. However, there’s no coherence to the final product, and there are large gaps between one article and the next. How to generate rays for a given camera? Sorry, not in my book.
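
For the record, here’s roughly what that missing chapter boils down to – a hedged sketch of pinhole-camera ray generation (the camera-at-origin, looking-down-negative-Z, Y-up conventions and the field-of-view handling are my assumptions, not anything from the compiled articles):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Primary ray through the center of pixel (px, py), for a pinhole
// camera at the origin looking down -Z with +Y up.
Ray generateRay(int px, int py, int width, int height, float vfovRadians) {
    float halfH = std::tan(0.5f * vfovRadians);
    float halfW = halfH * float(width) / float(height);
    float u = ((px + 0.5f) / width)  * 2.0f - 1.0f;   // [-1,1] across
    float v = 1.0f - ((py + 0.5f) / height) * 2.0f;   // +1 at top row
    return { { 0, 0, 0 }, normalize({ u * halfW, v * halfH, -1.0f }) };
}
```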

Still, it was great to learn of PediaPress and the ability to make my own Wikipedia book for free. Poking around their site, I even found a book on 3D computer graphics, called 3D Computer Graphics (catchy, neh?). Seeing others making books, I decided to share my own, so now it’s official. Mind you, I haven’t actually read through my book, nor even really checked the flow of articles – no time for that. I mostly grouped by subject and title after identifying likely pages. That said, I do like having a PDF file of all these articles that I can search through.

Obviously authors are not about to be replaced by Betascript books any time soon. If you want to read a real introduction to the topic, a book like Ray Tracing from the Ground Up might serve you better, even if it is a whole dime a page. This cost/benefit ratio for a good book is something I’ll never get over: books are sold at prices equivalent to just an hour or two of a computer programmer’s time, yet yield so much in the right hands.

7 Things for April 22

Quite the backlog, so let’s whip through some topics:

  • GDC: ancient news, I know, but here is a rundown from Vincent Scheib and a summary of trends from Mark DeLoura.
  • I like it when people revisit various languages and see how fast they now are on newer hardware and more efficient implementations. Case in point: Quake 2 runs in a browser using JavaScript and WebGL.
  • Morgan McGuire pointed out this worthwhile article on stereoscopic graphics programming. Quick bits: frame tearing is very noticeable since it is visible to only one eye, so vsync is important, which may force lower-res rendering, making antialiasing all that much more important. UI elements composited on top look terribly 2D, and aim-point UI elements need to be given 3D depths. For their game MotorStorm, going 3D meant a lot more people liked using the first-person view, and this view with stereo helped perception of depth, obstacles, etc. There are also some intriguing ideas about using a single 2D image and reprojecting it with the depth buffer to get the second image (it mostly works… see the sketch after this list).
  • I happened to notice ShaderX 7 is now available on the Kindle. Looking further, quite a few other recent graphics books are, too. What’s odd is that the price differential varies considerably: the Kindle ShaderX 7 is only $3.78 cheaper, while Fundamentals of Computer Graphics is $20 less.
  • Speaking of ShaderX, its successor GPU Pro is not out yet, but Wolfgang started a blog about it (really, just the Table of Contents), in addition to his other blog. The real news: you can use Amazon’s Look Inside feature to view the contents of the book right now!
  • Here are way too many multithreading resources.
  • In case you somehow missed it, you must see Pixels.
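
About that depth-buffer reprojection idea from the stereoscopy item above, here’s a minimal sketch of the concept. The disparity formula and constants are simplified, illustrative choices, and a real implementation must also fill the holes this leaves at depth discontinuities – which is why it only “mostly works”:

```cpp
// Fake the second eye's view by shifting each pixel horizontally by a
// disparity derived from its linear view-space depth. Pixels at the
// convergence plane get zero disparity; farther pixels shift more.
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<unsigned> pixels;  // packed RGBA, row-major
};

Image reprojectForOtherEye(const Image& src, const std::vector<float>& depth,
                           float maxDisparity, float convergenceDepth) {
    Image dst = src;  // unwritten pixels keep the source color (crude hole fill)
    for (int y = 0; y < src.height; ++y) {
        for (int x = 0; x < src.width; ++x) {
            float d = depth[y * src.width + x];
            int disparity = int(maxDisparity * (1.0f - convergenceDepth / d));
            int sx = x - disparity;  // gather from the shifted location
            if (sx >= 0 && sx < src.width)
                dst.pixels[y * src.width + x] = src.pixels[y * src.width + sx];
        }
    }
    return dst;
}
```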

Three ways to show off your game at SIGGRAPH 2010

I recently spent a weekend in downtown LA helping the SIGGRAPH 2010 committee put together the conference schedule. Looking at the end result from a game developer’s perspective, this is going to be a great conference! More details will be published in early May, but you can see the emphasis on games already; of the current (partial) list of courses, over half have high relevance to games.

If you are a game developer, we need your participation to help make this the biggest game SIGGRAPH ever! A few months ago I posted about the February 18th deadline. That deadline is long gone, but several venues are still open. This is your chance to show off not just in front of your fellow game developers, but also before the leading film graphics professionals and researchers. The most relevant venues for game developers are:

  1. Live Real-Time Demos. The Electronic Theater, a nightly showcase of the best computer graphics clips of the year, has long been a SIGGRAPH highlight and the tentpole event of the Computer Animation Festival (which is an official qualifying festival for the Academy Awards). The Electronic Theater is shown on a giant screen in the largest convention center hall, before an audience packed with the world’s top computer graphics professionals and researchers. Last year SIGGRAPH introduced a new event before the Electronic Theater to showcase the best real-time graphics of the year. The submission deadline for Live Real-Time Demos is April 28th (a week and a half away), so time is short! Submitting your game to Live Real-Time Demos is as simple as uploading about 5 minutes of captured game footage (all submitted materials are held in strict confidentiality) and filling out a short online form. If you want your game submitted, please let your producer know about this ASAP; it will likely take some time to get approval.
  2. SIGGRAPH Dailies! (new for 2010) is where the artists get to shine; details here, including cool example presentations from Pixar. Other SIGGRAPH programs present graphics techniques; ‘SIGGRAPH Dailies!’ showcases the craft and artistry with which these techniques are applied. All excellent production art is welcome: characters, animations, level lighting, particle effects, etc. Each artist whose work is selected will get two minutes at SIGGRAPH to show a video clip of their work and tell an interesting story about creating it. The submission deadline for ‘SIGGRAPH Dailies!’ is May 6th. Submitting art to Dailies is just a matter of uploading 60-90 seconds of video and filling out an online form. If your studio is planning to submit more than one or two Dailies, you should use the batch submission process: designate a representative (like an art director or lead) to recruit presentations and get producer approval. Once the representative has a tentative list of submissions, they should contact SIGGRAPH (click this link and select ‘SIGGRAPH Dailies’ from the drop down menu) to give advance warning of the expected submission count. After all entries have video clips and backstory text files, the studio representative contacts SIGGRAPH again to coordinate a batch submission.
  3. Late-Breaking Talks. Although the initial talk deadline is past, there is one more chance to submit talks: the late-breaking deadline on May 6th. SIGGRAPH talks are 20-minute presentations, typically about practical, down-to-earth film or game production techniques. If you are a graphics programmer or technical artist, you must have developed several such techniques while working on your last game. If there is one you are especially proud of, consider submitting a Talk about it; this only requires a one-page abstract (if you happen to have video or additional documentation, you can add them as supplementary material). To show the detail expected in the abstract and the variety of possible talks, here are five abstracts from 2009: a game production technique, a game system/API, a game rendering technique, a film effects shot, and a film character.

Presenting at one of these forums is a professional opportunity well worth the small amount of work involved. Forward this post to other people on your team so they can get in on the fun!

More on God of War III Antialiasing

Since my recent post discussing the antialiasing method used in God of War III, Cedric Perthuis (a graphics programmer on the God of War III development team) was kind enough to email some additional details on how the technique was developed, which I will quote here:

“It was extremely expensive at first. The first not-so-naive SPU version, which was considered decent, was taking more than 120 ms, at which point we had decided to pass on the technique. It quickly went down to 80 and then 60 ms when some kind of bottleneck was reached. Our worst scene remained at 60 ms for a very long time, but simpler scenes got cheaper and cheaper. Finally, and after many breakthroughs and long hours from our technology teams, especially our technology team in Europe, we shipped with the cheapest scenes around 7 ms, the average GoW3 scene at 12 ms, and the most expensive scene at 20 ms.

In terms of quality, the latest version is also significantly better than the initial 120+ ms version. It started with quality way lower than typical MSAA2x on more than half of the screen. It was equivalent on a good 25% and was already nicer on the rest. At that point we were only after speed; there could be a long post mortem, but it wasn’t immediately obvious that it would save us a lot of RSX time, if any, so it would have been a no go if it hadn’t been optimized on the SPU. When it was clear that we were getting a nice RSX boost (2 to 3 ms at first, 6 or 7 ms in the shipped version), we actually focused on evaluating whether it was a valid option visually. Despite the great performance gain, the team couldn’t compromise on quality; there was a pretty high bar to reach to even consider the option. And as with the speed, the improvements on the quality front were dramatic. A few months before shipping, we finally reached quality similar to MSAA2x on almost the entire screen, and a few weeks later, all the pixelated edges disappeared and the quality became significantly higher than MSAA2x or even MSAA4x on all our still shots, without exception. In motion it became globally better too; a few minor issues remained which just can’t be solved without sub-pixel sampling.

There would be a lot to say about the integration of the technique into the engine and what we did to avoid adding any latency. Contrary to what I have read on a few forums, we are not firing the SPUs at the end of the frame and then waiting for the results the next frame. We couldn’t afford to add any significant latency. For this kind of game, gameplay comes first, then quality, then framerate. We had the same issue with vsync; we had to come up with ways to use the existing latency. So instead of waiting for the results next frame, we use the SPUs as parallel coprocessors of the RSX, and we use the time we would have spent on the RSX to start the next frame. With 3 or 4 ms of SPU latency at most, we are faster than the original 6 ms of RSX time we saved. In the end it’s probably a wash in terms of latency, due to some SPU scheduling considerations. We had to make sure we could kick off the jobs as soon as the RSX was done with the frame, and likewise, when the SPUs are done, we need the RSX to pick up where it left off and finish the frame. Integrating the technique without adding any latency was really a major task; it involved almost half of the team, and a lot of SPU optimization was required very late in the game.”

“For a long time we worked with reference code: algorithm changes were made in the reference code, and in parallel the optimized code was being optimized further. The optimized version never deviated from the reference code. I assume that doing any kind of cheap approximation would have prevented any changes to the algorithm. There’s a point, though, where the team got such a good grip on the optimized version that the slow reference code wasn’t useful anymore and got removed. We tweaked some values, made a few major changes to the edge detection code, and did a lot of testing. I can’t stress it enough: every iteration was carefully checked and evaluated.”

So it looks like my first impression of such techniques – that they are too expensive to be feasible on current consoles – was not that far off the mark; I just hadn’t accounted for what a truly heroic SPU optimization effort could achieve. I wonder what other graphics techniques could be made fast enough for games, given a similar effort?
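
For a rough idea of the class of per-pixel work involved: morphological antialiasing (MLAA) techniques like this one generally begin with an edge-detection pass over the frame, something like the simplified luminance-based sketch below (my own illustration, nothing like Sony’s heavily optimized SPU code):

```cpp
// Simplified MLAA-style first stage: flag pixels whose luminance
// differs enough from their left or top neighbors. Later stages find
// the shapes these edges form and blend along them.
#include <cmath>
#include <vector>

float luminance(float r, float g, float b) {
    return 0.2126f * r + 0.7152f * g + 0.0722f * b;  // Rec. 709 weights
}

std::vector<bool> detectEdges(const std::vector<float>& luma,
                              int width, int height, float threshold) {
    std::vector<bool> edge(luma.size(), false);
    for (int y = 1; y < height; ++y) {
        for (int x = 1; x < width; ++x) {
            float c    = luma[y * width + x];
            float left = luma[y * width + (x - 1)];
            float top  = luma[(y - 1) * width + x];
            if (std::fabs(c - left) > threshold ||
                std::fabs(c - top) > threshold)
                edge[y * width + x] = true;
        }
    }
    return edge;
}
```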

ACM SIGGRAPH 2010 Election

I received my ACM SIGGRAPH 2010 Election form today; it provides some login info and a PIN. SIGGRAPH members can vote for up to three people for the Director-At-Large positions.

I can be pretty apathetic about these sorts of elections, ACM and IEEE alike, I have to admit. Sometimes I’ll get inspired and read the statements, sometimes I’ll skim, sometimes I’ll just vote for names I know, and sometimes I’ll ignore the whole thing. This year’s ACM SIGGRAPH election is different for me, because of issues brought up by the Ke-Sen Huang situation. Specifically, the ACM’s copyright policy is lagging behind the needs of its members.

For this SIGGRAPH election I was happy to see that James O’Brien is on the slate. In the past James has worked to retain the rights to his own images, so he’s aware of the issues. In his election statement he writes:

The ACM Digital Library has been a great success, but the move to digital publishing has created conflicts between ACM and member interests. ACM and SIGGRAPH are fundamentally member service organizations and I believe that through thoughtful and progressive copyright policies we can better align organization and member needs. Successful copyright policy has to work across formats, and SIGGRAPH is unique among ACM SIGs in that member-generated content spans a diverse range encompassing text, images, and video. Other organizations have embraced Open Access initiatives, but SIGGRAPH and ACM should be leading the way in this area.

He has my vote. He’s also the only candidate who addresses this area of concern, and in a thoughtful and professional manner. If you’re a SIGGRAPH member, I hope you’ll take the time this year to read over the statements, figure out your login ID and user number, and then go vote.