Posted by: scheidydude | January 22, 2014

Empowering Your Developers Through Database Schema Control

As developers, we have all worked with databases in one form or another.  And there are many suggested guidelines for maintaining change control over your database schema.  I am not going to dispute any of those, since any procedural control over change is a good thing.  I am, however, going to present my view from an operational support perspective, because the Ops view and the Dev view of control often simply do not align.

I have been doing both operational support and application development for many years and have therefore developed my own opinions on how Schema Control should be done. Schema Control, just like Source Control, should be stored centrally, revised, branched, reviewed, tested and secured.  I have some rules for each of these and how they apply to the different types of content in a database schema.  I will try to address each of these, but first let me lay down some simple guidelines.  Simple in theory, but how simple or difficult in execution depends on your platform and resistance from your team.  We are going to force people out of their comfort zone, just a little, but enough that they will initially feel constrained, when in essence what we are trying to do is empower them.

Ground Rules / Guidelines

First: Never use a shared database for development.  This is probably the hardest thing for developers to overcome.  If Tom’s database works, why not just let Frank use it instead of having to have his own?  It’s a single authoritative source for the schema and all the data is valid.  Let’s just use that.  But what if Tom needs to change the “Name” field from 50 characters to 30?  He failed to tell Frank and now Frank’s code fails.  Frank spends hours debugging code that was perfect just days ago.  Imagine similar scenarios across a team of 20 developers.  Sure, we can say Tom should have warned Frank, but we all know that even the best of us sometimes fail to communicate every detail.

Second: Have a single authoritative source for the schema.  This will allow Tom and Frank, and every other developer, to have an exact copy of the database.  (Schema, triggers, stored procedures…everything.  More on this later.)  If you try to have multiple databases, and you will when cloning for another user or moving to QA, you will ultimately end up not knowing which database has the latest correct schema and data.  Having a single authoritative source, and having everyone be able to pull from it, allows all users and all databases to be in sync at any given time.  Everyone should be able to pull the latest version of the schema and deploy it.

Third:  Always version your database.  You must be able to propagate schema changes from development to QA to production in a controlled manner. And just as importantly, you must be able to deploy any version of the database at any time.  Let’s say customer “A” has found a bug in the database, a bug you may or may not have already fixed.  There is no way, short of going on-site and futzing with their production environment, to test and fix such an issue.  You’ve kept previous versions of the application, but how do you deploy and test them without matching revisions of the database schema?

Fourth: Have a small set of users who can merge changes to the schema back into the master for deployment.  People who will ask (even if it’s just by reviewing a change control ticket) why the change is being made and whether this is the best way to implement it.

Baseline

Now in order to version your schema you will need to establish a baseline.  Even the best developers and DBAs can’t do this in a single sitting.  You should let the application development mature for a little while.  Work with a small set of stakeholders and initial developers to establish a starting point.  Build out your schema and let people begin developing against it.  Many developers can mock up or stub out code that simulates actually using the database, and these mock-ups can help define the initial build.  Once your team feels comfortable with the current state, usually just a day or two of work depending on the size of the team, it’s time to grab your baseline.  All major databases have tools that allow you to dump or script the schema.  This script should act as your baseline.  I’ll get into naming conventions later.  After the baseline has been created, all other changes to the schema should require a change script, not a new baseline.  You can re-baseline at any time, but I strongly suggest that this only coincide with new major version releases.

Now for my personal “strong” opinions on which types of data go in which files.

The baseline schema file should contain only the scripts necessary to create the database schema itself.  This includes tables, indexes and relationships.  And that is it.  I don’t even include the ability to drop the tables and database, as that is a destructive process, and creating, as the name implies, is a constructive one.  Separate scripts should be used for dropping existing databases, thus preventing the accidental destruction of a production database.  The baseline script should be created for the base of every release branch.  Not, I repeat, NOT for every feature branch.  Any changes required for features are just that, changes.  All schema modifications and referential requirements for a feature branch should be part of a change script.  And a change script, as it is part of a feature branch, by definition should not make any breaking changes.
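
To make that concrete, here is a minimal sketch of what a baseline script might contain.  I’ve written it in T-SQL, and the table and column names are purely hypothetical; your real baseline will obviously be far larger.

    -- CRM-BaseLine-1.0.0.1.sql (hypothetical example)
    -- Constructive statements only: tables, indexes and relationships.
    CREATE TABLE State (
        StateID    INT          NOT NULL PRIMARY KEY,
        StateName  NVARCHAR(50) NOT NULL
    );

    CREATE TABLE Customer (
        CustomerID INT          NOT NULL PRIMARY KEY,
        Name       NVARCHAR(50) NOT NULL,
        StateID    INT          NOT NULL
    );

    CREATE INDEX IX_Customer_StateID ON Customer (StateID);

    ALTER TABLE Customer
        ADD CONSTRAINT FK_Customer_State
        FOREIGN KEY (StateID) REFERENCES State (StateID);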

Additional required elements such as triggers, stored procedures, functions and other “programmatic” items should be stored in separate files.  Preferably one file per item.  I often include an IF EXISTS…DROP step in these, as the intent is to be able to recreate (destruct, then construct) these items whenever needed.  These files are your first “change scripts” for the database.
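
As a hedged illustration, a single stored procedure file might look like the sketch below.  The procedure name and body are made up for the example.

    -- CRM-StoredProcedure-1.0.0.2.sql (hypothetical example)
    -- Destruct then construct, so the file can be re-run at any time.
    IF EXISTS (SELECT * FROM sys.objects
               WHERE name = 'GetCustomersByState' AND type = 'P')
        DROP PROCEDURE GetCustomersByState;
    GO

    CREATE PROCEDURE GetCustomersByState
        @StateID INT
    AS
    BEGIN
        SELECT CustomerID, Name
        FROM   Customer
        WHERE  StateID = @StateID;
    END
    GO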

Now there are two types of data in a database required for it to function as desired.  These are “referential” data and “dynamic” data.  Referential data is used by lookup processes or drop-down options to enforce consistent data quality.  These could be the names of states, cities, zip codes, or essentially any known content that must remain consistent (as defined by the application architecture).  Dynamic data is any content that cannot be known, or may be so large and varied that pre-populating or constraining input requirements is prohibitively difficult.  You will need a script to populate Referential data.  I’ll address Dynamic data later.

A separate script should be created for the population of referential data.  This script, while making a change to the content of the database, does not make a change to the schema or any programmatic items.  Therefore it should only include INSERT, UPDATE or SELECT statements.  Notice that I did not include DELETE.  That’s because I believe under most circumstances, say 99% of the time, you should never delete referential data.  If it’s no longer valid, add a column, mark it invalid and handle it in the application code.  The integrity of the data is the highest priority.  You do, however, have to plan for future releases in which that data never existed; there is no reason to carry non-referenced data forward, but that cleanup should be handled when managing your source control branches.  But what about that other 1%?  Well, we all make mistakes.  Sometimes you have to correct a bug that requires removing the invalid data.  As long as the script also corrects the foreign keys (and if you designed your database right, it will have to), then DELETE is acceptable.
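
A sketch of such a population script, again in T-SQL with made-up values, might look like this.  Note that it only inserts rows that do not already exist, and it retires a value by flagging it rather than deleting it.

    -- Referential data population (hypothetical example).
    -- INSERT, UPDATE and SELECT only; no DELETE statements.
    IF NOT EXISTS (SELECT 1 FROM State WHERE StateID = 1)
        INSERT INTO State (StateID, StateName) VALUES (1, 'Alabama');

    IF NOT EXISTS (SELECT 1 FROM State WHERE StateID = 2)
        INSERT INTO State (StateID, StateName) VALUES (2, 'Alaska');

    -- Marking a value invalid instead of deleting it.  This assumes an
    -- IsActive column was added to State by an earlier change script.
    UPDATE State SET IsActive = 0 WHERE StateName = 'Alaska';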

Dynamic data is a different beast.  This data is required to test the application.  Under no circumstances should dynamic data be scripted into a Production deployment.  At least not as part of Schema Control. Dynamic data, also referred to as seed data, should be part of the application’s Source Control, even if it still requires a DBA to execute it.  But most programmers can work around that requirement.

One last thing to note on creating your baseline, or any of the schema/data scripts: never hard-code the database name in your scripts.  The name of the database should always be passed in as a parameter.  This will ensure that the wrong database is not affected and will allow multiple databases to co-exist on the same server.  The same rule applies to the creation of users.  Each user should be restricted to its own associated database, ensuring your code is hitting exactly what you think it’s hitting.
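
How you pass that parameter depends on your tooling.  As one hedged example, SQL Server’s SQLCMD mode lets you do it with a scripting variable; the variable, login and user names below are just illustrations.

    -- Run with: sqlcmd -S myserver -i script.sql -v DatabaseName="CRM_Dev"
    USE [$(DatabaseName)];
    GO

    -- Restrict the application user to this one database.
    CREATE USER CrmAppUser FOR LOGIN CrmAppLogin;
    ALTER ROLE db_datareader ADD MEMBER CrmAppUser;
    ALTER ROLE db_datawriter ADD MEMBER CrmAppUser;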

Tracking Changes

If you’re going to go through all the trouble of enforcing Schema Control then you will want to reap the benefits.  Nothing is more frustrating than being asked “Is Development the same version as Production?” and having to answer “I don’t know”.  That knowledge should be easy to track without the need to run database comparison tools, or the dreaded “stare-and-compare”.

To prevent that, I add a table to the database called “SchemaChanges” with the columns ID, Major, Minor, Release, Build, ScriptName and DateApplied.  Every script that changes the schema, or is run because of changes to the schema, must write to this table as its last step.  Whenever we need to know the state of our schema, we simply query the SchemaChanges table for the last script that was run.
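
Here is a sketch of the table and the bookkeeping it enables, in T-SQL; the version numbers and script name are made up.  The ContentChanges table described next has the identical definition.

    CREATE TABLE SchemaChanges (
        ID          INT IDENTITY(1,1) PRIMARY KEY,
        Major       INT           NOT NULL,
        Minor       INT           NOT NULL,
        [Release]   INT           NOT NULL,
        Build       INT           NOT NULL,
        ScriptName  NVARCHAR(255) NOT NULL,
        DateApplied DATETIME      NOT NULL DEFAULT GETDATE()
    );

    -- The last statement of every schema-changing script:
    INSERT INTO SchemaChanges (Major, Minor, [Release], Build, ScriptName)
    VALUES (1, 2, 0, 7, 'CRM-SchemaChange-1.2.0.7.sql');

    -- "What version is this database?"
    SELECT TOP 1 Major, Minor, [Release], Build, ScriptName, DateApplied
    FROM   SchemaChanges
    ORDER  BY ID DESC;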

We are also going to track any scripts that seed data or migrate data.  But we will do this in another table called “ContentChanges” with the columns ID, Major, Minor, Release, Build, ScriptName and DateApplied.  Now, even though the structure of these tables is exactly the same, the content is not.  You can easily run multiple migration and data modification scripts without ever needing a schema change.  Pre-populating or Seeding tables so that valid test data exists is not critical to the schema.  But needing to know if or when it was done could be helpful.  Also, these types of processes can be done by any developer and need not be limited to the DBA function.

File Naming Conventions

Now let’s get to the fun stuff (I really have to find a new hobby).  Setting a standard, and following it, for file naming is essential.  Why?  Because we are human.  We make mistakes.  And interpreting the meaning of a file name is as difficult as guessing the summary of a book from the title on its cover.  So I like to set the standard for filenames early in a project and make it stick.

I use a hyphenated style of file naming pattern.  I say “style” because I actually use the underscore “_” character; many scripting languages, which you might use to automate the process, interpret the hyphen as a minus sign.  Other than that, the pattern I use is simple enough.

The first portion of the name is the “functional” name of the database.  By functional I mean the purpose of the database and not the actual name used to reference the database.  As an example your database’s function may be to hold CRM data, but the name of the database may be “MyCompanyCRM”.  In this case, we want to use “CRM” and not “MyCompanyCRM”.

The second portion is the purpose of the script.  For Schema Control there are five purposes: BaseLine, Trigger, StoredProcedure, Function and SchemaChange.  The purposes of these should be self-evident and follow the guidelines above.

The third portion is the version.  The versioning system used should follow what is tracked by the SchemaChanges table.  These consist of Major, Minor, Release and Build numbers.  Of special note: Major, Minor and Release should align with the application’s Source Control versioning scheme.

Below is an example of the five types of names I would use and their purposes presented in the order in which they should be applied to the schema.

Purpose           Number of files               File Name Example
Baseline          1 file per Release Branch     DBname-BaseLine-version.sql
Function          1 file per Function           DBname-Function-version.sql
Stored Procedure  1 file per Stored Procedure   DBname-StoredProcedure-version.sql
Trigger           1 file per Trigger            DBname-Trigger-version.sql
Schema Change     1 file per Feature Branch     DBname-SchemaChange-version.sql

The way in which you execute these scripts will vary based on your OS and DB Engine choice.  But the naming conventions and script contents should be applicable across the board.

Here are some other suggestions on Common Sense Guidelines:

  • A new Baseline should not include deprecated schema structure.  If it’s not required by the application, it should not be included in the schema.
  • No change at the Release level should ever be a breaking change.  If a change breaks the version, then it’s time to increment at the Minor level instead.
  • Provide a Migration path from version to version.  No client will ever want to manually re-enter their data in order to get the new version.  The process for these changes should be (see the brief sketch after this list):
    1. Create New Database
    2. Migrate Content
    3. Test Application
    4. Cleanup
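
As a rough, hedged illustration of the “Migrate Content” step, here is a T-SQL sketch; the database names, table and script name are hypothetical, and the new database is assumed to have been created from the new baseline in step 1.

    -- Step 2: Migrate Content (hypothetical example).
    INSERT INTO CRM_v2.dbo.Customer (CustomerID, Name, StateID)
    SELECT CustomerID, Name, StateID
    FROM   CRM_v1.dbo.Customer;

    -- Record the migration so ContentChanges tells the story later.
    INSERT INTO CRM_v2.dbo.ContentChanges (Major, Minor, [Release], Build, ScriptName)
    VALUES (2, 0, 0, 1, 'CRM-ContentMigration-2.0.0.1.sql');
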
Posted by: scheidydude | July 31, 2013

When Good is simply Good Enough.

I consider myself a creative person.  I enjoy drawing, sculpting, painting, etching and various other practices in the visual arts.  I also enjoy writing and story telling (i.e. improvisational stories).

There is also another creative side to me.  One that many non-techie people can’t quite understand.  I enjoy designing and deploying working solutions.  Be it a network, domain structure, or software application and database.  Those creations, while not “artistic” in the common man’s eye, are still creative achievements and, when done well, are a work of beauty.

Now, this isn’t a post to convince people that technology is art.  That’s just a fact.  No, what I am trying to convey here is that every achievement has various stages of development.

A painting starts with a canvas, then a base coat, then a base color wash.  Often at this stage simple guides and outlines are added to provide shape and flow.  Then more details and colors.  This process is layered on over time until the desired effect is created.  And that’s where it gets difficult, or rather more difficult.  A good artist has to know when to stop.  There is a point at which the work is done.  Adding more details, more colors and features will actually detract from the experience we get from viewing such works of art.  The artist also has to know which details to leave out and which ones are critical for the story and emotion that they are trying to express.  It’s easy, far too easy, to overwork the art.

The same is true for software.  The artist, or developer in this case, must be able to determine which features are critical, which are merely nice to have, and which will detract from the product.

As developers and software designers it is easy to overwork the product.  As we create and code the product, we often think of new features that would be simple enough to add.  But every feature takes time.  Time to code, time to test, time to document.  And every feature needs a way to implement it.  Which means more menus, more buttons, more code, more tests and more documentation.

The point I’m trying to make here is that time adds up.  While we’ve been striving for the perfect feature-rich application, the customer has moved on.  They’ve found another product that does just what they needed, as minimalistic as it may have been.  While people will wait for the perfect product, history has taught us they will only wait until someone gives them an alternative.  And most consumers will not leave a good tried-and-true product for the promise of perfection.  And if that “perfection” requires time and effort to switch to, then you’re almost guaranteed rejection.  Give them “good” now.  Make your “good” their tried-and-true product.  You can give them perfect later.

Posted by: scheidydude | July 30, 2013

Self Motivation – or The Art of Being Stupid

(Re-posted from 2011.)

Now, take a moment and ponder what I could possibly mean by that title.

Good.  Now forget it.  Because that is not what I meant.  So let me now lead you into the whirlwind mess that is inside my head.

Have you ever had a project so overwhelming, so daunting, that the sheer idea of starting it sends you into conniptions?  I have.  I am standing on the precipice of just such a project.  And this isn’t the first time I’ve faced this particular project.  Every year at this time it pops up.  Some years are easier than others.  Some, not so much.  This year is to be the worst, although in the end it will make each subsequent year easier.  But in any case, it’s not a project I want to undertake.  The mere fact that I know the size of the project makes undertaking the smaller tasks difficult.  If I were “ignorant” of the overall scope I might be able to focus.

So what do you do about it?  Well, I have a theory.  Sometimes it even works.  Have you ever heard someone say “you can’t see the forest for the trees”?  Simply put, you don’t know how large the project is because you’re focused on the current task in front of you.  Well, I intend to use that “focus” to my advantage.  See, I know how large the project is.  So to complete the project, I am not going to focus on it.  But rather, I am going to motivate myself to focus on each task individually, and let each completed task motivate me on to the next.  In that way, I can enjoy the trees without fearing the forest.

Now, if I could just find my calamine lotion.

By now you are probably feeling a little overwhelmed by what, at least on the surface, appeared to be a short-term project.  But don’t lose focus on the end goal here.  We want to reduce your overall costs associated with maintaining your IT.

So let’s look at some of the benefits I listed for making the move.  At the top of the list is reducing staff.  Everyone thinks of this first, but if you’re like a lot of small businesses, you may only have one person in IT.  Don’t expect to get rid of that single person.  Do expect them to be able to get more done now that they don’t have to handle the maintenance tasks you probably never even knew about.  I am speaking from experience here: in the past, 40% of my job has been completely transparent to the end user.  That same 40% is usually why I’m up at midnight trying not to impact the end user.  So if your single IT person is like me, they might not appear to get more work done, but they will look a lot more rested and be a lot more pleasant to work with.

The next three items are directly related.  You will see a reduction in in-house hardware requirements.  This means fewer servers, and fewer network requirements to keep them functioning.  Fewer servers means lower power consumption as well as lower property insurance costs.

Now, here’s one I haven’t mentioned before.  Moving your resources to the Cloud does not mean simply hosting your servers off-site.  Although this is a possible scenario and a cost savings, it isn’t the best.  When you move to the Cloud you should consider making the move to Cloud Computing.  Cloud Computing is a “pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” (See the National Institute of Standards and Technology (NIST) definition of Cloud Computing.)  To help clarify this, let’s assume you discover that you need another server installed to handle an increase in workload.  In the past you would need to research, order, install, configure and roll out the necessary hardware and software internally while making sure to allocate power, cooling and physical space appropriately.  With Cloud Computing you log into your provider’s portal and allocate a virtual resource.  All the other physical requirements have already been prepared for.  So usually within hours, often just a few minutes, your new server is online and handling the increase in workload.  And, once the increase has subsided, you can release that resource back into the Cloud, freeing it up and returning your costs to the level before the increased workload occurred.

This is the most alluring thing about existing in the Cloud.  You can dynamically add and remove resources as your needs change.  In the past, once that resource was on your books, it stayed on your books.

Also, by moving to the Cloud, if some disaster (hurricane, flood, earthquake, fire…or even a simple power outage) were to occur, your IT resources would be off-site and most likely unaffected.  Remote users and clients may not even know you’ve had an issue.

That’s the end of my list of advantages to moving to the Cloud.  There are more, but I’ll stop there.  Hopefully the last one will keep you perky while we look at some of the drawbacks to moving to the Cloud.

Actually, the first one is not really a drawback, but it does need to be considered.  People need to be trained.  Your resources, which are now in the Cloud, still need to be managed.  Someone has to know how to add and remove users, how to add and remove virtual servers, and how to set up and restrict access to the resources.  In addition, end users need to know how to access these new resources, be it email, files or applications.  But keep in mind, in the Cloud or not, when technology changes, people still need to be trained.

The last two concerns are vendor related.  No matter which vendor you choose, you will need to consider how these will affect you.  In order to host anything, whether it’s a single page website or gigabytes of email, you will have to enter into a contract with a vendor.  Of course I’ve just stated the obvious.  But what might not be so glaringly obvious are the new constraints that you probably don’t face with internal IT.

First is schedules.  Your vendor is also the vendor for countless other clients.  So if they need to make a change (patch, upgrade, replace…) they have to consider how to do so while affecting the fewest of their clients.  If you are affected while a few dozen others are not, they will be apologetic, but that’s how the numbers fall.  Remember, though, that when you asked internal IT to do maintenance during off-hours, you may have received some resistance.  A vendor has already figured out when the best off-hours are, and you probably won’t be affected at all.

The second item is the contract itself.  Just like any other service contract, it will list what you should and shouldn’t expect.  It will also provide you a way out of the contract if they don’t meet your expectations.  Unfortunately, your expectations may not match what’s in the contract.  Do your research.  Make sure you understand the language and techno-jargon used to define those expectations.

Lastly, and one that affects all the services you get from the Cloud, is bandwidth.  Bandwidth, the speed at which you are connected to the Internet, will affect your service.  Your remote users and clients may see an increase when accessing your Cloud based services.  But the people located in the office where the internal resources used to be will not.  They will experience the same speed your remote users do.  This will be seen as a drawback, probably a large drawback, by those internal and a benefit by those external.  It’s a balancing act.  One that must be managed both physically and mentally.  You should make sure everyone, and I mean everyone, understands what is changing and why.  How the move to the Cloud is a benefit to everyone.

OK, so now you really know what you spend, in salary and time, maintaining and updating your technological infrastructure.  You want to re-coup some of that expense and re-focus your efforts. 

So what does it take to move to the Cloud?  This is a loaded question.  First you have to decide what you want to move.  Simply saying “IT” is the wrong answer.  IT, which stands for Information Technology (if you don’t know that, why are you reading this?), encompasses more than just any one service.  It’s more than just that button we click to check our email.  It’s the files we save, the phones we use, the voice mail we listen to, the numerous applications we run hundreds of times a day without a second thought to how they work or where they reside.  So saying “move our IT to the Cloud” could easily cost more time, effort, and money than could ever be saved.  No, each sub-department of your IT (and each sub-sub-department, depending on how big your IT requirements are) is another project that should be considered, individually, before deciding to move it to the Cloud.

Even after you’ve decided what to move to the Cloud, there are still costs associated with the actual move itself.  There is no magic button to press and “poof” you’re in the Cloud.  There are different criteria for each IT resource that you want to move to the Cloud.  Email has to be migrated and client software updated to recognize the new servers.  Other servers on the Internet have to know that your email domain has moved.  Files have to be migrated and drive mappings updated or web access set up.  Backup procedures have to be defined and tested, and re-tested.  Permissions, for all of these, have to be assigned.  Domain trusts have to be set up so each user doesn’t have to authenticate to every new resource.  And the hardest of all, users have to be trained.  Everything they’ve done, everything they’ve learned, has just changed.  And like most small businesses, time is money, schedules are tight, and people miss training sessions.

So each resource that you want to move to the cloud may require a separate project for that move.  And once it is moved, there are still more costs associated with maintaining your existence in the Cloud.

I can hear that collective “WHAT?  I moved most of my hard resources to the Cloud.  I reduced my staff or re-focused team members so they could add more value to our core business.  And now you tell me I still have to spend time and money just to stay in the Cloud.”  Yes, that is exactly what I’m saying.  By moving to the Cloud, you were able to shuffle resources (people) and reduce expenditures (hardware/software) so that you could essentially increase your bottom line.  But just because you’re “in the Cloud” doesn’t mean your IT costs are completely gone.  While your IT staff might not have to test and deploy new hardware and software, or patch and repair servers, they will still need to be trained on how to manage the new Cloud-based resources (adding and removing users, mailboxes and file shares, backing up and restoring data).  These tasks still exist, they are just done differently.  And they will need to coordinate with your service providers for when upgrades, patches and repairs are required.

OK, now uncross your eyes, take a deep breath, think of daisies in the field, and relax.  Are you relaxed?  Good.  Now stop that and get back to work.

So let me list the Pros and Cons for moving to the Cloud, as I see them.

PROS:
Reduced staff (I prefer re-focused staff)
Reduced in-house hardware (server) requirements
Reduced power consumption
Reduced property insurance
Ability to dynamically allocate and/or move resources

CONS:
Training End Users
Training IT Support Staff
Confined to Vendor Schedules
Confined to Vendor Contracts
Bandwidth Constraints

Yes, I added a few items that I haven’t covered.  Some are obvious, others not so much.  In my next post I’ll go into a little more depth on the “not so much” group.

So you want to make the move to the Cloud.  You’ve looked at your numbers.  You know what you spend, in salary and time, maintaining and updating your technology infrastructure.  Now read that last sentence again.  This is the most important step.  Many small businesses do not truly know what it costs them to maintain their infrastructure.  Nor do they often budget for future growth or obsolescence.  So take your time and get this step done right. 

So, what is your IT costing you?  How do you estimate cost and value beyond the numbers already on your balance sheet?

Estimating the cost and value of your information technology is not an entirely new process.  The same investments (that sounds better than costs) in education and continuing education that apply to office staff, associates, trainees and even mid-level to C-level officers also apply to your IT staff.  In some ways they even apply at an accelerated pace.

Technology often changes at an exponential rate (though that doesn’t mean we need to be on the cutting edge).  For small businesses, IT equipment is mostly purchased and capitalized on the books over a period of several years.  This practice doesn’t quite fit anymore.  Most consumer products and even low-end enterprise laptops and desktops shouldn’t be on the books for more than 3 years.  Advancements in technology can quickly move these investments into obsolescence.  While the cost of a laptop or desktop is not minute, the cost of a server and its associated operating system and applications is large in comparison.  And yet that server and software can easily be obsolete in 3 to 5 years.  To carry the cost of that technology on the books beyond that time is to carry a loss with no offsetting asset.  And just as that server, operating system or application may need to be updated, so do the skills your IT staff needs in order to support those updates.  And, while companies were capitalizing the cost of the equipment longer than the life of the assets, they also were not budgeting for training staff on the updates that they did not see coming.

So on your books you already have your hard assets (servers, operating systems and applications) and soft assets (employee salaries – and let’s face it, your staff, while an expense, are assets to the company – you wouldn’t want to have to train new employees in the unique way you do business).  But if you look further down the road, say within the next 3 years (most likely sooner if you’re considering moving to the Cloud), then you will also have the expense of upgrading or replacing your existing servers and applications as well as the expense of training your staff (both IT and end users).

Now we have your hard assets, soft assets, and future expenditures (hardware upgrades, OS upgrades, applications upgrades, and staff training).  Add them all up and there you have it, your current estimated cost for your Information Technology.

I didn’t get into the Pros and Cons yet, but we needed a good starting place, and knowing your current and future costs to maintain what you have is the best place.

I promise I’ll get into the Pros and Cons in the next post.

Posted by: scheidydude | March 9, 2011

Small Business and the Cloud (part 1)…Why is it so foggy?

What is the Cloud?  I’ve been asked this a few times.  By friends, family, small business owners; people who are more focused on their lives and livelihood than they are on the latest (or not so latest) trends in technology. 

A cloud is a mass suspended in an atmosphere.  How could this possibly relate to the Internet?  Well, this relationship actually goes back further than the Internet.  The term cloud, or more specifically, the diagram of a cloud, has been used for decades to refer to a distributed network of inter-related services such as electrical power grids and telephone networks.  Instead of drawing out every node covering the entire network, architects would draw a cloud representing the larger external network used to connect the nodes they were concerned with.  As Bulletin Board Systems (BBS) and dial-up modems slowly gave way to Internet Service Providers (ISP) and broadband connections, the cloud diagram naturally fit to represent the Internet.  So, by that association, the Cloud is the Internet.

However, that is but the simplest of definitions.  The Cloud represents so much more.  Cloud Computing, Cloud based Storage, Cloud based Applications, Software as a Service…the list is potentially endless.  What I want to attempt to answer here is the most common question I am asked.  What do they mean by “moving to the Cloud”?

In essence, any company that offers Internet access (access without the need of specialized software or hardware) to corporate resources (be it email, files, contacts or more) is in the Cloud.  But that is exactly the opposite of what they mean by “moving to the Cloud”.

“Moving to the Cloud” means moving those corporate resources such as email, files, backups, applications, etc…, to non-corporate assets hosted in the Cloud.  Applications, email, files, almost all IT related functions and internal expenditures can be moved to the Cloud.  This allows companies to reduce or refocus staff, remove technical liabilities and overhead (server room cooling, electrical, even insurance), and concentrate on adding value to the business instead of supporting what could become aging software and hardware dependencies.

But as nice as that may sound for the bottom line of your financial statement, it’s not that simple, or that quick.

In my next post I will address (what I see as) the pros, cons and possible pit-falls for moving to the Cloud, especially for the small to mid-sized business.

Posted by: scheidydude | March 8, 2011

Golden Rules for Coding

1. If you open it, close it.
2. If you create it, dispose of it.
3. If you unlock it, lock it up.
4. If you break it, admit it.
5. If you can’t fix it, call someone who can.
6. If you use it, declare it.
7. If you value it, document it.
8. If you make a mess, re-factor it.
9. If you check it out, check it back in.
10. If it belongs to someone else, get permission to use it.
11. If you don’t know how to operate it, read the wiki.
12. If it’s none of your business, don’t ask questions.

Posted by: scheidydude | March 8, 2011

Golden Rules for Living

1. If you open it, close it.
2. If you turn it on, turn it off.
3. If you unlock it, lock it up.
4. If you break it, admit it.
5. If you can’t fix it, call someone who can.
6. If you borrow it, return it.
7. If you value it, take care of it.
8. If you make a mess, clean it up.
9. If you move it, put it back.
10. If it belongs to someone else, get permission to use it.
11. If you don’t know how to operate it, leave it alone.
12. If it’s none of your business, don’t ask questions.

Posted by: scheidydude | December 15, 2010

Going from ‘Duh’ to ‘Dumb’

We’ve all had those moments when the obvious just simply wasn’t so obvious.  Things that should have been clear were blurred by the distractions of the moment.  Things like stopping at a green light because it’s been red every other time you were there, stopping at a stop sign and waiting for it to turn green because you were having a conversation, or answering the phone while still finishing another conversation leaving the caller wondering why your mother needed 2 gallons of extra virgin olive oil.

The simple fact is we live in a high paced world.  We are used to, if not even raised to, perform multiple tasks at once.  But there are times when these tasks should have all of our attention.  Driving has become second nature to most of us.  Sometimes you may even fail to recall how you got to where you were going simply because you’ve driven that same way every day for years.  But if you change the route, hear sirens or see something that normally isn’t there, all of a sudden you pay more attention.

It’s called Focus.  We have to be able to focus even when the distractions are there.  When you were learning to drive you gave it all your focus.  When you were taking your driver’s test you gave it all your focus.  When you were climbing out of that second story window before her husband made it up the stairs you gave it all your focus.  All I want when you ask me to explain something is that you give me all your focus.

Case in point – I had a conversation about a new feature I added to an application to make some procedures easier.  The conversation went something like this:

User: “How do I send an email from here?”

Me: “Click the Send Email button right there.”

User:  “Oh, I see, like this. Duh!  Wait, my phone’s ringing.”

Meanwhile a message is created in Outlook.

User: “Hold on, the IT guy is here.”

The user adds some notes and sends the email.

User has now returned to the application.

User: “Why does that button say Send Email?”

Me: “Weren’t you here a second ago?”

FOCUS
