Following the tutorial I whipped up the basic star collection game and made one or two changes, like adding a double jump. You can find it here.
Now to see if I can do anything worthwhile with it.
If you're adding prerecorded wavs to the Creation Kit you may notice them stuttering, distorting or playing too fast. This is a fairly simple problem to fix but it might take you a while to diagnose. Skyrim's internal player has rigid expectations, so sound files added to it must have the following attributes:
-a 44100 Hz sample rate
-a 16-bit depth
-uncompressed PCM encoding
In my case the wav files had been recorded at a lower sample rate, 22050 Hz. I was able to open them in Audacity and export them at the higher sample rate with no loss in quality. The sample rate setting is at the lower left of the screen in Audacity, by the way.
The Biztalk Deployment Framework is an extended support system for deploying Biztalk solutions. Why do you need this? Out of the box, Biztalk can be packaged up into an msi which is imported and installed through the administration console. That msi will contain your orchestrations, maps, pipelines and ports. What it won't install is your configuration, your business rules policy, your host instances, your web services. External assemblies? Good luck. BAM artifacts? You're on your own.
The Biztalk Deployment Framework is Biztalk deployment as it should be. Setting it up is not a one click operation; there are XML files to configure. But it makes deployment itself one click, and it has good documentation.
Rustled up a little pathfinding algorithm last week. It's not very good, however, and I'm wondering if I should switch to Dijkstra's algorithm with an unvisited node set.
A Move object contains:
-a target x,y that the move is aimed at.
-a set of x,y co-ordinates called the path.
-a complete boolean that when true indicates the move has reached the target.
Move is a type of action so it has an Act() method. When Act() is called it gets the next move from the path. If the path is empty it first calculates the path, then gets the next move.
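Here's a rough sketch of that structure in Java. Action, Point, Unit and TileMap are stand-ins for whatever the real types are, and the path is held as a queue of points; none of this is the actual project code.

import java.util.ArrayDeque;
import java.util.Deque;

interface Action { void act(); }

class Move implements Action {
    private final Unit unit;       // the unit this move is acting on
    private final TileMap map;     // the map of walkable and non-walkable tiles
    private final Point target;    // the target x,y the move is aimed at
    private final Deque<Point> path = new ArrayDeque<>(); // the calculated path
    private Direction direction;   // non-null while sliding around an obstacle
    private boolean complete;      // true once the move has reached the target

    Move(Unit unit, TileMap map, Point target) {
        this.unit = unit;
        this.map = map;
        this.target = target;
    }

    @Override
    public void act() {
        if (path.isEmpty()) {
            calculatePath();       // calculate the path before stepping
        }
        Point next = path.poll();  // next step, or null if still blocked
        if (next != null) {
            unit.moveTo(next);     // assumed Unit method
        }
    }

    private void calculatePath() {
        // sketched after the step-by-step description below
    }
}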
Obviously all the real legwork is in the CalculatePath() method. CalculatePath() has access to:
-The unit the move object is acting on
-The overall map of walkable and non-walkable tiles
As it has access to the unit it knows the current x,y of the unit. It also has the target x,y of the Move. So it sets about calculating a path between the two like this.
If direction is null it simply calculates the point on the line between the current x,y and the target x,y that is the unit's speed in distance from the current x,y, heading in the target's direction.
If direction is not null it calculates a movement in one of eight directions depending on the value of direction.
It then checks whether the move is into an area that's walkable. If it is, it adds the movement to the path.
If it's not, it derives a new direction from the move it just tried, so that direction is set perpendicular to the obstacle.
Finally, if the move is walkable and has brought us to the target x,y, the Move's complete flag is set to true.
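Continuing the sketch, calculatePath() might look like the following. Direction is an assumed eight-way enum, and stepInDirection, perpendicularTo and near are assumed helpers rather than names from the real code.

enum Direction { N, NE, E, SE, S, SW, W, NW }

private void calculatePath() {
    double cx = unit.getX(), cy = unit.getY();  // the unit's current x,y
    Point next;
    if (direction == null) {
        // Head straight for the target: the point on the line between
        // current x,y and target x,y that is the unit's speed away.
        double dx = target.x - cx, dy = target.y - cy;
        double dist = Math.hypot(dx, dy);
        if (dist == 0) { complete = true; return; } // already there
        double step = Math.min(unit.getSpeed(), dist);
        next = new Point(cx + dx / dist * step, cy + dy / dist * step);
    } else {
        // Sliding around an obstacle: move in one of the eight directions.
        next = stepInDirection(cx, cy, direction, unit.getSpeed());
    }
    if (map.isWalkable(next)) {
        path.add(next);
        if (near(next, target)) {
            complete = true;  // the walkable move arrived at the target
        }
    } else {
        // Blocked: derive a direction perpendicular to the move we just
        // tried, so the next attempt slides along the obstacle. Choosing
        // which of the two perpendiculars to take is exactly where this
        // approach gets stuck on corners and complex shapes.
        direction = perpendicularTo(next.x - cx, next.y - cy);
    }
}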
As you can see, the tricky part is all in what to do when hitting an obstacle. Moving perpendicular often won't work on corners or complex objects, and with no set of closed tiles it's entirely possible to get stuck in a loop trying to find a destination. Dijkstra's algorithm would be an overall improvement. I also haven't implemented an interrupt for paths which become blocked. At the moment paths are only checked for blockages at calculation time, and they're calculated in short bursts. Eventually I plan to migrate to calculating the entire path up front, with a check on each move to see whether a new blockage has appeared, forcing a recalculation of the path. A speedy algorithm will be essential there.
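For comparison, here's a minimal sketch of what the Dijkstra version could look like over a grid of walkable tiles, with the unvisited set handled as a priority queue. All names here are illustrative, not from the project.

import java.util.Arrays;
import java.util.Comparator;
import java.util.PriorityQueue;

class GridDijkstra {
    // Distances from (sx, sy) to every walkable cell; four-way movement,
    // uniform step cost. A path is recovered by walking back from the
    // target through strictly decreasing distances.
    static int[][] distances(boolean[][] walkable, int sx, int sy) {
        int h = walkable.length, w = walkable[0].length;
        int[][] dist = new int[h][w];
        for (int[] row : dist) Arrays.fill(row, Integer.MAX_VALUE);
        dist[sy][sx] = 0;
        // The unvisited set: cells ordered by current best distance.
        PriorityQueue<int[]> unvisited =
            new PriorityQueue<>(Comparator.comparingInt((int[] c) -> c[2]));
        unvisited.add(new int[] { sx, sy, 0 });
        int[][] dirs = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
        while (!unvisited.isEmpty()) {
            int[] cur = unvisited.poll();
            int x = cur[0], y = cur[1], d = cur[2];
            if (d > dist[y][x]) continue;          // stale queue entry
            for (int[] dir : dirs) {
                int nx = x + dir[0], ny = y + dir[1];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                if (!walkable[ny][nx] || d + 1 >= dist[ny][nx]) continue;
                dist[ny][nx] = d + 1;              // found a shorter route
                unvisited.add(new int[] { nx, ny, d + 1 });
            }
        }
        return dist;
    }
}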
Let's review our programming philosophy. Why we code and how we code are inextricably linked. Before we get too deep into CS305 Philosophy of Computer Science though, let's start with the basics. What makes a great programmer? Larry Wall defines them thusly:
"We will encourage you to develop the three great virtues of a programmer: laziness, impatience, and hubris."
The qualifiers are always quickly given to reassure and amuse. Lazy programmers make the computer do the work. Impatient programmers make sure the computer anticipates their needs. Prideful programmers ensure nothing goes wrong. But why do the qualifiers need to be given at all? In the interest of clarity, why not simply state the virtues as:
Committed to automation
Committed to anticipation
Committed to correctness
Not as snappy, I'll admit, but far less ambiguous. After all, isn't a programmer's job to disambiguate a process, breaking it down into elementary steps that a computer can accomplish? Instead we have the sardonic wit of Larry Wall: said one way, meant two others. Sarcasm does not compute, but humans do.
As well as Reflector, there's a Microsoft tool that allows you to view an assembly's contents: ildasm. It even shows you the code inside the assembly's methods, albeit in Microsoft's intermediate language. I've found it useful for getting the overall structure of a compiled assembly, unfortunately a necessity even in this black box testing world.
It's amazing what time can do to revitalise your interest in a project. FEED still has a vast scope and only one part-time worker, but I think it's time to add a few more versions to the counter.
I have a fun bit of trickery I thought I'd share for setting up optional parameters in SQL Reporting Services 2008.
Suppose you have a report looking at sales. Your first variable is the sale date, the second the type of product sold. There are two steps to setting these up: first you create the report parameters, then you add them as variables to the SQL query. The first stage informs the second. Suppose we require the sales date but want the type to default to all types if not filled out.
What you can do is set your Type parameter to allow a null value, then have it retrieve the type values from a query. That query, by the way, is a shared dataset which UNION ALLs the results with 'All' as Label, NULL as Value, so as to give users an explicit 'All' option. Otherwise they'd only be able to run reports on all types by reloading the page to reset Type to its default value.
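As a sketch, with a hypothetical Products table standing in for wherever your type values actually live, that dataset query looks something like:
SELECT DISTINCT [Type] AS Label, [Type] AS Value FROM Products
UNION ALL
SELECT 'All' AS Label, NULL AS Value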
Once the optional parameter is set up in the first stage we can then add it to our select query in the second stage. For Sales Date we have
WHERE [SalesDate] = @SalesDate
so to add Type we render it as
WHERE [SalesDate] = @SalesDate AND ([Type] = @Type OR @Type IS NULL)
You can easily attach more optional parameters to make a more complex report by following the pattern.
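For instance, with a hypothetical optional Region parameter set up the same way as Type:
WHERE [SalesDate] = @SalesDate
AND ([Type] = @Type OR @Type IS NULL)
AND ([Region] = @Region OR @Region IS NULL)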
My only concern is the speed of the query with multiple variables, OR being essentially a UNION operation. If you know a better optimized pattern, feel free to let me know in the comments.
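For what it's worth, one commonly suggested tweak for this catch-all pattern is OPTION (RECOMPILE), which makes SQL Server build a plan for the parameter values actually supplied rather than a generic one, at the cost of a recompile per run:
WHERE [SalesDate] = @SalesDate AND ([Type] = @Type OR @Type IS NULL)
OPTION (RECOMPILE)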
This is one of the old computer science wars, one that happened before I was born but still affects me. Programming started out on punch cards, and punch cards were not re-usable. Even after the switch to "virtual" software, programs were large all-in-one affairs: all the code was in one place and the computer ran it from top to bottom. GOTO was the hallmark evolution. If you wanted to re-use a line you could jump to it using GOTO. GOTO was a one-way journey; once you moved to that line you continued from there. This meant a program no longer just went top to bottom. With a little help from conditional checks it could jump straight to the end if the input wasn't valid, or run the same segment a thousand times if you gave it a big enough array. However, things were still monolithic.
Dijkstra was the herald of structured programming and the modularisation it brought. He wasn't the first to say GOTO was bad, but he was the loudest. With the advent of structured programming, computer scientists devised a new method of control: the two-way journey of the subroutine. Instead of jumping to a line and continuing from there, a subroutine could be travelled to and run through, and at its end control returned to the point in the program where the subroutine was called. This in turn allowed the program to return to the easily tracked traversal of top to bottom, with some detours along the way.
However with time and reflection, just throwing out GOTO seems a bit immature. After all, subroutines were just an evolution of GOTO: two-way travel rather than one-way. You can have more than one type of screw. While most programming benefits from the structured paradigm, there are bound to be edge cases where GOTO can be used, in the same way that cursors have a place in SQL. A place, hopefully, far away from where we are, but a home nonetheless.
The BRE (Business Rules Engine) is one of the most fascinating components of Biztalk. An entire system unto itself, it allows complex logic to be executed via IF THEN conditions. It can loop, recurse or "forward chain". All your conditions are placed in a policy, which is how Biztalk categorises its sets of conditions. One thing to note about BRE policies, however, is that they are immutable. Once published on the server they cannot be altered; you can only create a new version and publish it in the old one's place.
Of course that's not entirely true. You can unpublish policies from Biztalk, though it's recommended you don't outside of a development environment. Nevertheless, if you want to go ahead and clean up the old versions, here's how.
Connect to your Biztalk DB through SQL Management Studio. Your policies can be found with:
SELECT * FROM [BiztalkRuleEngineDB].[dbo].[re_ruleset]
Once you identify the policy version you want to unpublish use:
UPDATE [BiztalkRuleEngineDB].[dbo].[re_ruleset]
SET nStatus = 0
WHERE strName = 'MyPolicy' AND nVersion = 'myVersion'
with your name and version.
If you have the composer open while you do this, be sure to refresh in order to see your changes.
You can do this with vocabularies too! They're in [BiztalkRuleEngineDB].[dbo].[re_vocabulary].
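Assuming the vocabulary table mirrors the ruleset columns, which I haven't verified, the equivalent would be something like:
UPDATE [BiztalkRuleEngineDB].[dbo].[re_vocabulary]
SET nStatus = 0
WHERE strName = 'MyVocabulary' AND nVersion = 'myVersion'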
It's the little details that get you down. SQL Server Integration Services is all about extracting, transforming and loading data. The big problem with doing that successfully is handling the data types. You see, SSIS has a slightly different set of types to what your SQL database has. Translating between them is done automatically, and just like a weakly typed language it can get them wrong. Recently, however, I had a different issue, one which requires relating the whole saga.
Firstly my package hung, sitting on pre-execute. This alone was vexing for there was no obvious reason. So I turned to the old adage "Let the database handle it". I made stored procedures of my selection queries, and temporary tables to draw from.
Alas an SQL task above my data flow gave syntax errors and dutifully I went to debug. I turned off retain same connection and that seemed to do the trick. I was running, or getting closer to it.
Alas, behind one error another: parameter not found in mapping. Thus I went to rename the parameters, to make them whole and clean.
I set up my package around this new way, with an Execute SQL Task followed by the data flow. Now the data flow would retrieve its data from temporary tables that existed only as long as my connection. Of course this meant I had to turn retain same connection back on; SSIS is diligent in cleaning up after itself.
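To sketch the shape of it, with hypothetical table and procedure names: the Execute SQL Task stages the data into a temp table, and the data flow's source then reads from that table over the same retained connection.
-- Execute SQL Task (connection manager has RetainSameConnection = True)
CREATE TABLE #SalesStaging (SaleID INT, SaleDate DATETIME, Amount MONEY)
INSERT INTO #SalesStaging
EXEC dbo.usp_GetSales
-- OLE DB Source query in the data flow, on the same connection
SELECT SaleID, SaleDate, Amount FROM #SalesStaging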
It is a strange thing, but in testing I have found that using parameters in SSIS queries slows it down incredibly. I do not know the vagaries of pre-execute, only the lengths to which it must go. Thus the lesson for today: when going for complex parametric queries, just use stored procs. Slow they may be, but faster than SSIS.