A Forum for Original Thought

Nowadays, people hunger for original analyses and theses. Their pangs are reflected in the popularity of video series like The Idea Channel, Extra Credits, The Big Picture, and TED talks. Essentially, these are just spoken essays and presentations. They don’t really utilize the video medium, other than by coupling speech with a slideshow of images and (occasionally) video clips. Yet more and more these videos are supplementing written forms like blogs and columns. The intersection of an unquenchable desire for consumable media (i.e. videos) and a veritable drought of mental stimulation makes spoken essays a desirable form of idea transmission.

Perhaps the number of quick-fact “educational” videos (e.g. Minute Physics, Smarter Every Day, CGP Grey, Vsauce, Numberphile) stimulated the Internet’s interest in science. Indeed, there seems to be a vibe coursing through the tubes that “science is cool”, even if the way science is taught in schools isn’t. The realization that the scientific realm, learning, and, more generally, intelligent thought can be interesting has made people desire an influx of original analysis. It stimulates the brain, giving way to more thought in a way that other media have (mostly) failed to do.

In a world with an endless volume of consumable content, our brains may have become starved. Long periods of rumination can be painful and boring, so we flood them with cheap, throwaway media. Yet these times of inward reflection may serve an important purpose. Unfortunately, our over-stimulation by Internet videos, TV, movies, video games, and music has left us unable to focus on content-delivery platforms like text. We thirst for mental stimulation, yet cannot bear to gain it by taking a step backwards. This conundrum gave rise to the popularity of “spoken essays”. They inject creative, original thought quickly and painlessly. As we mull over each gem, we can explore the subject further in the video comments. Such discussion is evidenced by the considerable quality of comments on the aforementioned videos. Trolls, raging arguments over politics and religion, and insults have given way to (somewhat) thoughtful debates about the video’s analysis. Occasionally the next video in the series might mention some interesting points or a surprising overall consensus concerning the previous video.

But is the classroom going extinct as a forum for intelligent discussion? Does it have a place in the furious online world? Perhaps. Although quick-fact videos give information, they very rarely delve into the depths of a subject and explain it in a way that lets the viewer solve entirely new problems on her own. They hand over the informational topsoil, but hold back any sort of theoretical bedrock. A viewer might come out feeling smarter, but she will not have gained any tools in her arsenal of critical analysis and problem solving. This is partially due to the medium: spending longer exploring a subject undermines the videos’ initial appeal, which is quick learning.

However, some video series manage to seriously teach a subject while staying interesting. Crash Course has series on biology, literature, ecology, US history, and world history, served up by the vlogbrothers. They don’t necessarily go into the same depth that a yearlong course would, but that’s not really a problem here (it’s called “Crash Course” for a reason). The fact that dozens of videos are spent exploring one subject is a start. Another faux-classroom video venue is Udacity. Udacity is a different beast; it is much more of an exploration of the online course format than Crash Course is. The physical classroom is woefully unfit to teach computer science. Udacity takes a stab at creating a classroom environment that takes advantage of its medium to deliver a more fitting CS education to a much greater volume of people, while still keeping a basic academic form.

Ultimately, I see a rise in the popularity of systems like Udacity, as well as series like Extra Credits and The Idea Channel. If educators want to truly grab the interest of new generations, they need to examine that which is already capturing attention. Rather than lamenting the advent of consumable, throwaway media, embrace it. There is a place for education in online videos and video games.


Snow Crash

Oh. Yes. I am going to start off this post by talking about the absolutely brilliant book by Neal Stephenson (see Cryptonomicon), Snow Crash. The book that popularized the use of the word “avatar” as it applies to the Web and gaming. The book that inspired Google Earth. Despite being 20 years old, it is more relevant than ever, and it uses the cyberpunk theme to hilarious and thought-provoking extents. It paints the picture of an Internet/MMO mashup, sort of like Second Life, based in a franchised world. Governments have split up and been replaced in function by companies; competing highway companies set up snipers where their road systems cross, military companies bid for retired aircraft carriers, and inflation has made trillion-dollar bills nigh worthless.

In the book, a katana-wielding freelance hacker named Hiro Protagonist follows a trail of mysterious clues and eventually discovers a plot to infect people with an ancient Sumerian linguistic virus. The entire book is bizarre, but it has some great concepts and is absolutely entertaining. Stephenson never fails to tell a great story; his only problem is wrapping them up. Anyways, I highly suggest you read it.

Well, I’ve been thinking about games again. I have two great ideas in the works, and one of them is a “hacking” game based roughly in the Snow Crash universe. It doesn’t really use any of the unique concepts from the book besides the general post-fall world setting and things like the Central Intelligence Corporation. It probably won’t even use the Metaverse, although that depends on how much I choose to expand the game from the core concept. The player does play, however, as a freelance hacker who may or may not wield swords (not that it matters, since you probably won’t be doing any running around).

I’m writing up a Project Design Document which will cover all the important points of the game:
Download the whole document

The Future of the Source Engine

Valve’s Source and GoldSrc engines and Epic’s Unreal engines have had a long, acrimonious rivalry. Both GoldSrc and the Unreal Engine debuted in 1998, in Half Life and Unreal respectively, and both games were considered revolutionary at the time. Unreal blew technical and graphical expectations out of the water. Half Life left a legacy as one of the most influential games in the FPS genre.

Unreal Engine screenshot
GoldSrc screenshot

Fast forward 6 years. Valve, in the meantime, has released Team Fortress Classic and Counter-Strike, both revolutionary games in their own right. The Unreal and Unreal 2 engines (the latter released 2 years prior) had become extremely popular platforms for game developers, mostly because of the engines’ notable modularity and room for modification.

In 2004, Valve debuts the Source engine with Half Life 2, a groundbreaking game that completely demolishes the competition and sets a long-lasting legacy in terms of story, gameplay, and graphics. For comparison, Unreal Tournament 2004 was published the same year.

Unreal Engine 2 screenshot
Source screenshot

Another 7 years on, Unreal Engine 3 has been released, and games like Gears of War and Batman: Arkham City have been developed with it. Valve, meanwhile, has just published Portal 2. The Source engine has evolved over the years, receiving many graphical upgrades along with compatibility with the major game consoles.

Batman: AC screenshot

However, it becomes readily apparent that the visual styles of these two engines have diverged in the years since 1998. The Unreal line of engines has supported games like Bioshock and Mass Effect, but has also borne the brunt of AAA games. Such games are known for their muted brown-grey color palette, uninteresting stories, and factory-made gameplay. Unreal Engine games are commonly criticized for character models that look “plastic” (a result of developers setting specular too high on materials), awkward character animations, and overuse of lens flares and bloom.

Games on the Source engine, on the other hand, consistently revolutionize some aspect of gaming. For example, Team Fortress 2, Portal, and Left 4 Dead are widely known for innovative gameplay. Unfortunately, Valve has lagged behind in terms of pushing the graphical frontier. Half Life 2 was smashingly good for its time, much in the same way that Halo stunned the gaming world back in 2001. However, every Source game since its debut has looked more and more aged.

Even worse, developers are driven away from using the Source engine due to a set of tools that have barely evolved since they were developed in 1998. Hammer, the level creation program, and Face Poser, the character animation blender, are unwieldy and unfinished; Source SDK tools are notorious for their bugs and frequent crashes.

Conversely, the Unreal toolset is streamlined and easy to jump into, an appeal that has drawn amateurs and professional developers alike. The Unreal editor lets you pop right into the game to see changes, whereas the Source engine still requires maps to be compiled (which can take minutes) before the most recent revision can be played. Unreal’s deformable meshes dwarf the Source engine’s awkward displacement system.
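
For a sense of that friction, here is a minimal sketch of the compile chain every Source map revision has to pass through before it is playable. The three tool names (vbsp, vvis, vrad) come from the Source SDK; the install path and map name below are hypothetical.

```python
# Sketch of the Source map compile chain; vbsp/vvis/vrad are the real SDK
# tools, but the paths and map name here are hypothetical.
import os
import subprocess

SDK_BIN = r"C:\SourceSDK\bin"   # hypothetical SDK location
MAP = r"C:\maps\ctf_example"    # map name; each tool appends its extension

# vbsp builds geometry, vvis computes visibility, vrad bakes lighting.
# vvis and vrad are the slow passes that make each iteration take minutes.
for tool in ("vbsp", "vvis", "vrad"):
    subprocess.run([os.path.join(SDK_BIN, tool + ".exe"), MAP], check=True)
```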

However, I have a feeling that a couple of factors are going to come together and boost both engines out of the recent stigma they have incurred. The biggest factor is that at some point the AAA game industry is going to collapse. The other critical event is Half Life 3.

Yes! Do I know something you don’t? Have I heard some rumor lurking on the Internet about this mysterious game? No. But I do know history. And that is more useful than all the forum threads in the universe.

Half Life was released in 1998. Half Life 2 was released in 2004. Episode 2 was released in 2007. Half Life 2 took 6 years to develop, despite sitting on a back burner for some of that time. By extrapolation, Half Life 3 should be nearing release within the next 2 years. However, circumstances are different.

The Source engine was developed FOR Half Life 2. Graphics were updated, but the toolset remained the same. In the time between HL2 and now, Valve has been exploring other genres: Team Fortress 2, Portal 2, and Left 4 Dead 2 all took a portion of the company’s resources. In addition, the last few years have been spent intensively on developing Dota 2 (which, by the way, was the cause of the free release of Alien Swarm). The second Counter-Strike was contracted out. So Half Life 3 has been a side project, no doubt going through constant revisions and new directions.

However, unless Valve is going to release Day of Defeat 2 or Ricochet 2 (yeah right) in 2013, production on Half Life 3 is going to kick into high gear. There is one fact that drives me to believe even more heavily in this theory.

Since 2011, and probably even earlier, Valve has been pumping a huge amount of effort into redesigning their entire suite of development tools. It had become readily apparent to everyone at the company that the outdated tools were making it impossible to develop games efficiently.

“Oh yeah, we’re spending a tremendous amount of time on tools right now. So, our current tools are… very painful, so we probably are spending more time on tools development now than anything else and when we’re ready to ship those I think everybody’s life will get a lot better. Just way too hard to develop content right now, both for ourselves and for third-parties so we’re going to make enormously easier and simplify that process a lot.”
-Gabe Newell

Because both TF2 and Portal 2 have been supported continuously since their release, they have been the first to see the effects of this new tool development. Valve seems to have used these games as testing grounds, not only for their Free to Play business model and Steam Workshop concept, but also for new kinds of development tools. First, the Portal 2 Puzzle Maker changed the way that maps were made. In the same way that Python streamlines the programming process, the Puzzle Maker cuts out the tedious technical parts of making a level.

The second tool released was the Source Filmmaker. Although it doesn’t directly influence the way maps are made, it has obviously been the subject of a lot of thought and development. The new ways of thinking about animation and time introduced by the SFM are probably indicative of shifting paradigms in Valve’s tool development group.

Don’t think that Valve is going to be trampled by any of its competitors. Despite Unreal Engine’s public edge over the Source engine, especially with the recent UE4 reveal, the AAA game industry is sick, and no other publisher has a grip on the PC game market quite like Valve does. And although 90% of PC gamers reportedly pirate games, PC game sales are hardly smarting. In fact, the PC game market is hugely profitable, racking up $19 billion in 2011, just a few billion shy of the entire console market. Yet the next best thing to Steam is, laughably, EA’s wheezing digital content delivery system Origin.

Numbers Source

Anyways, here’s hoping for Half Life 3 and a shiny new set of developer tools!

Source Filmmaker: First Impressions

Meet the Pyro

As you may have heard, the Source Filmmaker was released two weeks ago at the conclusion of the Pyromania Update for Team Fortress 2. At first, everybody who wanted it was required to submit a survey that included basic hardware and software specs about their computer, including whether or not a microphone was attached. The idea was that a limited, staged release would give a taste of what the tool is like without flooding the Internet with videos. However, after three weeks of semi-open beta, the SFM team has gone public. You can download it here. Here are my first impressions of the tool (there is a TL;DR at the bottom).

The Source Filmmaker is a tool that allows “players” to set up scenes within any Source game, and then edit the resulting clips as if they were in a video editing program. This hybrid system bypasses a lot of the conventional paradigms of filmmaking. You can simultaneously modify how you want a shot to look AND change how the sequence is cut together. Scenes still have props, actors, lights, and cameras. However, if you decide while editing that you want a shot of the same scene from a different angle, you can create a new shot from a new angle in seconds.

This is definitely the direction that movies are headed as a medium. Computer graphics have reached a level of visual fidelity that allows filmmakers to create entirely new elements and mix them with live footage. For instance, Sky Captain (an awesome movie, by the way) was shot entirely on blue-screen in some guy’s basement. All the environments and non-human actors were computer generated. This allowed the maker to move the actors around as he pleased. If he didn’t like the direction they were facing or their position on-screen, he could simply move them around like any other 3D asset.

Sky Captain and the World of Tomorrow

So far I’ve used the Source Filmmaker for a little over one week, on and off (I made this). From what I hear, experts at the program can deftly make complex scenes in minutes. However, I have yet to figure out all the hotkeys and efficient methods, so it takes me a long time to even sketch out a rudimentary scene. My speed is hampered, in some part, by the strange choice of hotkeys; the lower-left part of the keyboard seems to have shortcuts distributed at random. Yes, every program has a learning period in which shortcuts are committed to muscle memory. The SFM, though, for all its similarities to 3D programs, seems to have flipped the traditional hotkey set.

I digress, however. The primary aspect of the SFM that impedes my work is the tool’s concept of time and animation. To illustrate, let me explain the structure of the program. Each file is called a “session”: a self-contained clip. A single map is associated with each session. A session contains a strip of “film”, which is composed of different shots.

Shots are independent scenes within the same map. Each shot has a scene camera and various elements that expand upon the base map set. Each shot also has an independent concept of time. You can move a shot “forwards” or “backwards” in time, which doesn’t move the clip in relation to other clips, but changes which segment of time the shot is showing within its universe. You can also change the time scale, which slows down or speeds up the clip.
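
To make that concrete, here is a toy model of how I understand shot time to work. This is my own sketch for illustration, not the actual SFM data model; the class and field names are made up.

```python
# Toy model of SFM-style shot time (illustration only, not the real SFM API).
from dataclasses import dataclass

@dataclass
class Shot:
    start: float         # where the shot sits on the film strip (seconds)
    duration: float      # how long it runs on the film strip
    offset: float = 0.0  # moving the shot "forwards"/"backwards" changes this
    scale: float = 1.0   # >1 speeds the scene up, <1 is slow motion

    def local_time(self, film_time: float) -> float:
        """Map a time on the film strip into this shot's own universe."""
        return (film_time - self.start) * self.scale + self.offset

shot = Shot(start=10.0, duration=5.0, offset=2.0, scale=0.5)
print(shot.local_time(12.0))  # -> 3.0: two film seconds = one scene second
```

Shifting `offset` re-times what the shot shows without touching its place on the strip, which matches the behavior described above.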

If you move a shot to be before another shot, it will not change the shot, only the sequence in which the shots are displayed. This can be confusing and/or annoying. For instance, if you have a shot of someone talking, and you want a close-up or a different angle inside of that clip, there are two ways to do it. You could go into the motion editor and move the camera within the specific segment of time within the shot. The easier way, however, is to split the shot into three clips. The end clips remain the same, and inherit the elements from the single parent shot (which doesn’t exist anymore). In the middle clip, however, you change the camera to show a close-up angle. Both of these methods look the same, until you change your mind.

After you split a clip up into different shots, you can’t (to the best of my knowledge) add in a common element that spans all three shots, even though the elements that were there beforehand were inherited by all three. If you move a prop in one shot, it doesn’t translate over. This problem forces a strange workflow in which you set up the entire scene from one camera view, and only when you are satisfied do you split it up into different clips.

But how about the other method I mentioned? The motion editor allows you to select “portions of time” within a shot’s universe. You can make changes to objects and their properties, but the changes will only be visible within that time segment. For smooth transitions, it allows you to “partially” select time, and blend between two different settings. This feature can be extremely useful and powerful, but it is also a pain in the ass. While trying to hand-animate actors, I often find myself getting annoyed because I want to go back to the same time selection and add something in, or smooth over multiple curves. Since each entity stores its animation separately (each bone in an actor’s skeleton, for instance), I often change an animation but forget about a bone. The animation ends up completely screwed, and it’s easier to start over than to fix it.
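
My mental model of that partial selection is a weight ramp over time. This is just my reading of the motion editor, sketched with made-up numbers; it is not Valve’s implementation.

```python
# Toy sketch of a "partial" time selection: a weight that is 1.0 inside the
# fully-selected span and ramps to 0.0 at the edges (my reading, not Valve's).
def falloff_weight(t, hold_start, hold_end, ramp):
    if hold_start <= t <= hold_end:
        return 1.0
    if hold_start - ramp < t < hold_start:
        return (t - (hold_start - ramp)) / ramp
    if hold_end < t < hold_end + ramp:
        return ((hold_end + ramp) - t) / ramp
    return 0.0

def blend(old_value, new_value, t, sel=(2.0, 4.0), ramp=0.5):
    # Blend an animated property between its old and new setting over time.
    w = falloff_weight(t, *sel, ramp)
    return old_value * (1.0 - w) + new_value * w

print(blend(0.0, 90.0, 3.00))  # inside the selection -> 90.0
print(blend(0.0, 90.0, 1.75))  # halfway up the ramp -> 45.0
```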

Yes, a lot of this pain is due to my inexperience with the workflow. I’m sure I’ll get the hang of working with the strange animation system. But for any filmmaker or animator starting out, it will be quite a jump from the traditional keyframe methodology. In the Valve-made tutorials, the presenter talks about the graph editor, which seems to resemble a keyframed timeline. However, I have yet to glean success from its obtuse interface, and in any case the “bookmarking” system seems unnecessarily complex.

I want to cover one more thing before wrapping up: what can you put in a scene? Any model from any Source game can be added and animated, and there are also new high-res versions of the TF2 characters. Lights, particle systems, and cameras are available as well. For each of these elements, you need to create an Animation Set, which defines how the properties of the element change over time. IK rigs can be added to some skeletons, and any property of any object in the session can be edited in real time via the Element Viewer.

Another huge aspect of the program is the ability to record gameplay. At any time, you can jump into the game and run around as if you were playing. All the elements of the current shot are visible as seen by a scene camera, and you can even run around while the sequence is playing. You can also capture your character’s motion in “takes”. This is great for generic running around that doesn’t need gestures or facial animations. If you need to change something, you can convert the take into an animation set, which can then be edited.

On the note of character animation, lip syncing is extremely easy. Gone are the pains of the phoneme editor in Face Poser. You can pop in a sound clip, run phoneme auto-detection, apply the result to a character, and then go into the motion editor and manually adjust facial animation and mouth movements.

TL;DR: To summarize my feelings, any person who admires the Meet the Team clips or the Left 4 Dead 2 intro trailer should definitely check out the Source Filmmaker. It’s free, and the current tutorials let you jump into making cool short clips; every clip looks really nice after rendering. The program does require a lot of memory and processing power though, so you will be unable to work efficiently if your computer doesn’t get decent framerates in TF2.

Unity and tjSTAR

Here is a soundtrack for this post:

Everybody should use Spotify. It’s like magic, but real.

So I wanted to talk about Unity. For those who don’t know, Unity is a game engine. But it’s way more than that. The best way to describe it is as an IDE for game development, similar to UDK. Every part of the development cycle (aside from asset creation) can be done within the program, from placing assets to creating game object behavior to playtesting. The engine also has built-in support for pretty much anything you would want to do. Behavior is described through scripts, which can be written in JavaScript, C#, or Boo (which seems to be a Python/C# hybrid). Assets can be imported from almost any file format without external conversion; any 3D format that can convert to FBX works, as do image formats from PSD to PICT. Unity constantly checks for asset changes and updates them in real time.

A good analogy involves programming languages: UDK is like Java, while Unity is Python; Source is C++. You don’t really understand how much annoying background work you are doing with Source until you start using Unity. However, unlike Python, Unity has an enormous learning curve, which comes from its being extremely powerful. I’ve only just started working with it (a few days) and I can see that there is a huge amount of potential. I also still have no idea how to do most of anything.

The event that sparked my interest in Unity was tjSTAR, an annual symposium held by my high school. In addition to Design Challenge events and student presentations, there are also panels and professional talks. I attended 5 talks, all of which were fairly interesting.

Game Design and Development as an Academic Path and an Industry
This presentation is an in-depth look at majoring in game design in college, which universities offer the best programs, and how to get in, accompanied by an overview of the game industry and the careers it offers.

Mr. Danny Kim
Student
University of Southern California School of Cinematic Arts, Interactive Media Division

This was the presentation that got me interested in Unity. Of course, I had seen the engine before, such as at SAAST (in the computer graphics course, specifically), where most of the undergraduate projects we looked at were built in Unity. Danny Kim (a TJ alumnus himself) also talked about how TJ is a great source of talent, thanks not only to its large number of talented programmers but also its great writers and artists. Any interested reader should check out his blog, See Play Live.

Big Data: What Is It and How to Cope With It
With the digital world enveloping our lives through mobile devices, digital home appliances, digital sensors and controllers, and video, data growth is expected to be massive in the coming years, pushing into petabytes and zettabytes. Of this data, only 5%-20% will be structured. Find out how the technology world is preparing to cope with this onslaught.

Ms. Rumy Sen
President & CEO
Entigence Corporation

This talk was about processing large amounts of data, especially data sampled from online sources and social media. The objective is to analyze the whole of customer feedback across the Internet, rather than relying on small testing sessions run by marketing and consulting companies. However, this requires entirely new structures for storing and processing the data into a usable form. She talked about Hadoop and other approaches to managing unstructured data that differ from conventional database methods, as well as processing methods such as massively parallel processor arrays.
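
The core of the Hadoop approach she described is the MapReduce pattern. Here is a toy, in-memory sketch of the idea; a real cluster distributes the map and reduce phases across many machines, and the example data is made up.

```python
# Toy in-memory sketch of MapReduce-style word counting, the pattern behind
# Hadoop; real deployments run map and reduce on many machines in parallel.
from collections import defaultdict

def map_phase(doc):
    for word in doc.split():        # emit (key, value) pairs
        yield word.lower(), 1

def reduce_phase(key, values):
    return key, sum(values)

docs = ["the product is great", "great product but slow shipping"]

groups = defaultdict(list)          # the "shuffle": group values by key
for doc in docs:
    for key, value in map_phase(doc):
        groups[key].append(value)

counts = dict(reduce_phase(k, v) for k, v in groups.items())
print(counts["great"])  # -> 2
```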

Computer Vision: Challenges and Applications
Computer vision is the art of teaching computers to see and to understand what is in images and videos. The presentation will discuss some of the key challenges, and show practical applications.
Dr. Peter Venetianer
Director, Commercial Science Development
ObjectVideo

Computer vision is obviously interesting. The big brother of computer graphics (the two are inverse problems), it has stumped researchers for decades; the first professor to attempt the problem was sure that a summer with a lab of grad students would solve it. Now, 50 years later, we are starting to make some headway. Dr. Venetianer discussed some of the methods for separating critical objects from a noisy environment. Spotting movement from a fixed viewpoint is fairly easy: given three consecutive frames, you can find moving objects with the three-frame method, which compares the differences between successive frames. However, identifying the objects is much harder. If you know what to look for, the problem simplifies somewhat, but there are still numerous exceptions. A car is usually wider than it is tall, except when it is coming towards or moving away from the camera. A person is usually taller than they are wide, but a group of people is more likely to match the profile of a car.
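
Here is a minimal numpy sketch of three-frame differencing as I understood it from the talk; the threshold value is my own guess.

```python
# Minimal sketch of three-frame differencing (threshold chosen arbitrarily).
import numpy as np

def three_frame_motion(f1, f2, f3, thresh=25):
    """Boolean mask of pixels that moved in the middle of three consecutive
    grayscale frames (uint8 arrays of the same shape)."""
    d12 = np.abs(f2.astype(int) - f1.astype(int)) > thresh
    d23 = np.abs(f3.astype(int) - f2.astype(int)) > thresh
    # Requiring change in BOTH frame pairs suppresses the "ghost" that plain
    # two-frame differencing leaves where the object used to be.
    return d12 & d23
```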

Spacecraft Guidance, Navigation and Control
Guidance, navigation, and control (GN&C) is a specialty area in Aerospace Engineering that involves determination of how a vehicle gets to its target, knows where it is, and maintains its position or trajectory. These concepts and related technologies will be highlighted for spacecraft application. Some of the projects involving GN&C at Emergent Space Technologies will be summarized.

For more information, visit http://emergentspace.com/.
Dr. Sun Hur-Diaz
Vice President
Emergent Space Technologies, Inc.

Determining where you are in space might seem trivial until you think about it. Because hardware never reacts perfectly, a spacecraft needs to constantly check its position and orientation. You need a variety of instruments, such as sextants and telescopes, to determine orientation. To find your location in orbit you need at least four GPS satellites: three ranges pin down your position, and a fourth resolves your receiver’s clock bias. Finding which orbit you want to go into requires physics simulations, as well as constant corrections to maintain it. In fact, finding an orbit that optimizes fuel usage and time for a set of destinations is a huge field of its own.
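
To see why the clock bias forces a fourth satellite, here is a small numpy sketch that solves for position plus bias from four pseudoranges with Gauss-Newton iteration. The satellite coordinates and bias are invented for the demo.

```python
# Solving (x, y, z) and receiver clock bias b from four GPS pseudoranges
# via Gauss-Newton; all numbers are invented for the demo (units: km).
import numpy as np

sats = np.array([[15600,  7540, 20140],
                 [18760,  2750, 18610],
                 [17610, 14630, 13480],
                 [19170,   610, 18390]], dtype=float)
truth = np.array([-41.77, -16.79, 6370.06])   # receiver near Earth's surface
bias = 3.2                                    # clock error expressed as km
rho = np.linalg.norm(sats - truth, axis=1) + bias  # "measured" pseudoranges

x = np.zeros(4)  # unknowns: x, y, z, b
for _ in range(10):
    ranges = np.linalg.norm(sats - x[:3], axis=1)
    residual = rho - (ranges + x[3])
    # Each row is d(pseudorange)/d(x, y, z, b) for one satellite.
    J = np.hstack([(x[:3] - sats) / ranges[:, None], np.ones((4, 1))])
    x += np.linalg.lstsq(J, residual, rcond=None)[0]

print(np.round(x, 2))  # -> approximately [-41.77 -16.79 6370.06 3.2]
```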

Millimeter Scale Robotics Research and Development at the MITRE Corporation
As we continue to look for ways to keep soldiers and first responders out of harm’s way, the capabilities of robotic systems improve and demand for them increases. While large robots have been used extensively, the development of smaller robots opens up a range of additional potential applications, such as accessing confined areas for search and rescue or surveillance purposes. To address this emerging need, MITRE’s Nanosystems Group has been developing rugged, low-cost robots, designed to be carried in a pocket. They can be operated from a mobile phone and reconfigured in the field to quickly adapt to specific missions.
Ms. Jessica Rajkowski
Systems Engineer, Sr
The MITRE Corporation

I don’t know if I mentioned this before, but I am working at MITRE this summer, albeit in a different division. The talk was still fascinating. Some of it was about developing micro-scale “robots” using interesting properties of polymers and metals. The speaker also discussed MITRE’s development of a hand-sized field robot designed to be low-cost, low-maintenance, durable, and easy to control. Normally this would violate the rule of “cost, speed, quality; pick two”, and to some degree it was speed that was sacrificed: it took years to develop the robot, but at its current stage it is pretty amazing. Another subject of the talk was the speaker’s project to establish a consistent test for whether a vendor’s robot is up to MITRE standards.

I’ve also continued to use Google App Engine, and I’m working on a forum, seen here.

TF2 Mapping Competition

I recently got back into Source mapping (I made a post about this a while ago). However, the first thing I did was say, “I’m going to find a community that does Half-Life 2 mapping.” Apparently, none of those exist. I guess I should have expected as much from a 4+ year old game. I was depressed for a bit. I thought about migrating to another platform, but I’ve been working with Source for a long time and it’s the platform I’m most comfortable with. My ventures into CryEngine 3 and the Unreal Development Kit have been a bust. Maybe a project for another time.

Anyways, I decided to start mapping exclusively for TF2. Turns out, that’s a lot more fun. It’s less work to set up a fun map; more of the work lies in coming up with a good idea. Then you get to enjoy the map together with your friends, and if it becomes popular it’s very gratifying to see people on a server playing it and having fun. I needed a platform for my maps, though: a way to get them critiqued and then onto a test server. Now, I had tried TF2 mapping before, but I had taught myself as usual, and my maps weren’t very robust (or finished).

So I went searching for TF2 mapping communities. That’s when I found TF2maps.net. It’s a community of serious mappers that hosts competitions and has servers and regular map testing events. It’s exactly the source of critical analysis and constructive criticism I need. It also helps me feel less alone while mapping, and I know that once I finish a map, people will play and analyze it.

That’s one of the reasons I’ve been so busy. In addition to AP week and my Udacity class, I jumped straight into some mapping action over at TF2maps.net. I signed up just as the latest big official contest had ended, so I decided to enter a smaller, unofficial contest instead.

The contest is based around redesigning 2fort: keep the style and feel while creating a gameplay area that can handle a 32-player instant-respawn game. The description in the thread calls it a spiritual successor. Here’s a link to the contest thread: Reimagining the game’s worst map. Here is a link to a download/description page: ctf_teufort.

Anyways, I’ll be doing a lot more of that, so expect more posts on the subject. I also want to post more videos on YouTube, so I think I’ll make an overview video for each map I make.

Speaking of which, I fixed my screen two days ago, and filmed it! I’m going to composite the video over the weekend and then post it. In fact, I’ll be doing a lot more editing in the coming months because I’m finally getting the editing job for HDP (which I wrote a post about, too).

That concludes this series of shameless plugs.

Status Report

At the beginning of this month I declared that I would post once a day to this blog. Admittedly, I’m pretty proud: I posted on 22 days out of the 29. I started out strong, and only began to falter right at the end. I’ve stuck around long enough to get a few views, but most importantly I’ve developed a habit of trying to post something every day. Speaking of which, I was going to post this yesterday, but I accidentally fell asleep early.

For March, I’m going to do two things: start a private log, and start posting more videos. In terms of videos, I think I will start a series of tutorials covering the basics of Hammer and the Source SDK. I’ve already started recording some, but mostly just to get the kinks out of my production cycle. I have 2 scripts done and a third in the works, so all I need to do is refine my style. Mainly, I need to stick to the script and be more concise. A tutorial is about informing people; I can’t focus on making something look nice, I just need to explain the very basics and then move on. I tried to think about the tutorials I watch, and what annoys me when I’m watching one. Unfortunately, I’m not very good at analyzing things like that. Looks like I’ll have to rely on YouTube comments.

I’m going to continue posting, but only when I have something interesting to say. I will post at least once a week, however.
