...Does that sound about right?...
_______________________
-ewemoa
Hahaha, well it looks like it could be, but it seems that there's probably a lot more going on - or at least implicit in what is going on - than a simple précis such as yours might be able to do justice to.
For example, one of the implications that stood out for me was the potential usefulness of all the tools being used in a linked/sequential fashion. The demo showed it all being done manually (by typing commands into the PowerShell interface), complete with errors and then corrections, at the keyboard. The person at the keyboard probably needs to be something like (say) a Grade A system mechanic for the systems being used, with current knowledge all in his head as he types - and he did say he had spent a lot of time getting to that point - so there's a dependency right there.
Could it be done by an inexperienced operator/user? Probably not without further automation.
The challenge would thus seem to be to encapsulate/automate all of what he did, as (say) a batch job or (better) via a stable and robust GUI wizard interface.
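Just to illustrate what I mean by "batch job" encapsulation - and this is only a minimal sketch, not the actual steps from the demo (which I don't have), with placeholder package names - a first pass might look something like this in PowerShell:

```powershell
# Hypothetical sketch: run a fixed sequence of Chocolatey steps unattended,
# halting on the first failure instead of relying on an expert at the keyboard.
# The package names are placeholders, not the ones from the demo.

$steps = @(
    { choco install git -y --no-progress },
    { choco install 7zip -y --no-progress },
    { choco upgrade all -y --no-progress }
)

foreach ($step in $steps) {
    & $step
    if ($LASTEXITCODE -ne 0) {
        Write-Error "Step failed (exit code $LASTEXITCODE): $step"
        exit $LASTEXITCODE
    }
}
Write-Host "All steps completed."
```

If any step fails, the script stops there rather than ploughing on regardless - which is about the minimum you'd need before an inexperienced operator could be let loose on it.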
The OP for this thread requests opinions/thoughts about Chocolatey and using it as a portable tool. Portability might actually make what already seems to be a powerful and complex toolset even more complex to use, and thus more complex/difficult to automate (e.g., a decision table with too many potential decision branches, and unknown exits in the process, to be able to cater for them all easily). Furthermore, even if you did manage to automate it, would the potential impermanence of some of the toolset components frustrate the objective of the wizard GUI?
What I mean by that is, looking at it from a theoretical perspective, if:
- (a) the AS-IS process steps to achieve a given outcome are undocumented or poorly documented and liable to be changed at short notice in an uncontrolled fashion, then they are Ad hoc (CMM Level 1) - aka Chaotic. It would be a waste of time trying to automate that, as the risk would be that, by the time you had built and tested the automation, the process steps could have changed without your knowledge, pulling the rug out from underneath you, as it were. So that would not be recommended as a cost-effective action.
- (b) the AS-IS process steps to achieve a given outcome are undocumented or poorly documented at best, but are used repeatably and are thus more reliable, though still changed in an uncontrolled fashion, then they are Repeatable (CMM Level 2), and though it might seem worthwhile to try to automate that, it still carries the same risk as in (a). So that would not be recommended as a cost-effective action either.
- (c) the AS-IS process steps to achieve a given outcome are defined and documented, used repeatably, and only changed occasionally in a relatively controlled fashion, then they are Defined (CMM Level 3), and reliable to the extent that it would probably be worthwhile putting the effort into automating the process. So that would be recommended as a cost-effective action.
Things get even better at CMM Levels above that, but - and I could be wrong, of course - I get the impression from the video that the CMM Level in this case was likely to be 1 or 2, not 3, for some/most of the toolset components. In which case, from a risk-avoidance perspective, you take the lowest CMM Level of any part of the AS-IS process as your LCD (Lowest Common Denominator) and overall CMM level - it's the weakest link. That could be termed "Not yet ready for Prime Time", or something.
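To make that weakest-link rule concrete - a sketch only, with invented component names and levels:

```powershell
# Sketch of the LCD rule: the overall CMM level of a chained process is the
# minimum level of any of its components, and automation is only recommended
# at Level 3 (Defined) or above. Names and levels below are invented.

$componentLevels = @{
    'Toolset component A' = 2
    'Toolset component B' = 1
    'Wizard GUI shell'    = 3
}

$overall = ($componentLevels.Values | Measure-Object -Minimum).Minimum
if ($overall -ge 3) {
    Write-Host "Overall CMM Level $overall - automation looks cost-effective."
} else {
    Write-Host "Overall CMM Level $overall - not yet ready for Prime Time."
}
```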
Therefore, overall, I'd not be too optimistic about the process being something that could be fully automated with (say) a GUI wizard on the front. However, if one had the resources, an experimental approach might still be interesting. Try an exploratory "suck-it-and-see" - i.e., build a prototype of the automation Wizard - and see how long it lasts before a change (or successive changes) in the toolset breaks it. The trick then would be to see whether you could obtain advance warning of any impending changes, so as to have a fix in place in sufficiently timely fashion to avoid the Wizard failing.
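On the "advance warning" point, one crude but cheap approach - again a sketch, assuming (and it may not hold) that the whole toolset is Chocolatey-managed, and with an arbitrary baseline file path - would be to snapshot the package versions the prototype Wizard was last tested against and flag any drift before each run:

```powershell
# Sketch: detect toolset drift before running the Wizard prototype.
# Assumes a Chocolatey-managed toolset; the baseline path is arbitrary.

$baselineFile = 'C:\wizard\baseline-versions.txt'
$current = choco list --limit-output   # "name|version" per line
# (older Chocolatey versions may need --local-only added here)

if (-not (Test-Path $baselineFile)) {
    $current | Set-Content $baselineFile   # first run: record the baseline
    Write-Host "Baseline recorded - Wizard assumed good against it."
} else {
    $drift = Compare-Object (Get-Content $baselineFile) $current
    if ($drift) {
        Write-Warning "Toolset has changed since the Wizard was last tested:"
        $drift | Format-Table -AutoSize
    } else {
        Write-Host "No drift detected - safe(ish) to run the Wizard."
    }
}
```

It wouldn't give you warning *before* a change lands, of course, but it would at least stop the Wizard being run blind against a toolset it was never tested on.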
A bit of a rant:
A current example of an update process with an LCD at CMM Level 1 (Ad hoc/Chaotic) could be the process for releasing Mozilla Firefox Beta versions. I subscribe to the Beta release channel, and I have to put up with releases coming out like water sputtering out of a hosepipe with air-locks. Just about every release screws something or other up, typically breaking one or several FF add-ons/extensions, and usually for no better reason than that the probably overworked add-on developers don't have the time/resources to jump when FF says "Jump!", and so don't manage to get the add-on verified in time for the uncontrolled release schedule.
So the add-ons tab is spattered with disabled add-ons, because some wag at Mozilla has issued a bureaucratic mandate that all add-ons must be verified by Mozilla for each new FF release or the add-on will be disabled, or something.
Wherever you get CMM Level 1 or 2, you can usually identify cost inefficiencies and waste. The above Beta release process is what is often referred to by the euphemism "uncontrolled release management" in ITIL-speak, and is simply bad IT service management practice - where whether the term "management" even applies could be a moot point.
The amount of work it creates (a lot of which may be unnecessary/unproductive) for the add-on developers must be rather like an iceberg, and Mozilla probably isn't paying these third-party developers to dance to their tune either, so it seems to be a cynical cost-transfer or economic externalisation exercise with "all care and no responsibility" on Mozilla's part, and with the developers footing the bill.
Quite a lot of pundits seem to be saying that Mozilla might have had a "cultural collapse" and lost sight of their original objectives, and that this verification dance is likely one of the outcomes from that collapse - and they may be right, but I couldn't possibly comment.