

Chocolatey...opinions? portable?


panzer:
Thanks to panzer for pointing it out :)
-ewemoa (September 19, 2015, 06:27 PM)
--- End quote ---

You are welcome.

IainB:
...Does that sound about right?...
_______________________
-ewemoa (September 19, 2015, 09:47 PM)
--- End quote ---

Hahaha, well it looks like it could be, but it seems that there's probably a lot more going on - or at least implicit in what is going on - than a simple précis such as yours might be able to do justice to.
For example, one of the implications that stood out for me was the potential usefulness of all the tools being used in a linked/sequential fashion. The demo showed it all being done manually (by typing commands into the PowerShell interface), complete with errors and then corrections, at the keyboard. The person at the keyboard probably needs to be something like (say) a Grade A system mechanic for the systems being used, with current knowledge all in his head as he types - and he did say he had spent a lot of time getting to that point - so there's a dependency right there.
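For illustration, a manual session of the sort shown in the demo might look something like this - purely my guess at the flavour of it, using only standard Chocolatey commands, with example package names:

--- Code: ---
# List what is currently installed locally
choco list --local-only

# Search the community repository for a package
choco search 7zip

# Install a package, answering yes to any prompts
choco install 7zip -y

# Upgrade everything that has a newer version available
choco upgrade all -y
--- End code ---

Each of those is trivial on its own; the skill (and the dependency) is in knowing which ones to string together, in what order, and what to do when one of them fails.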

Could it be done by an inexperienced operator/user? Probably not without further automation.
The challenge would thus seem to be to encapsulate/automate all of what he did, as (say) a batch job or (better) via a stable and robust GUI wizard interface.
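By way of a rough sketch of the "batch job" idea - hypothetical package names, assuming only the standard choco commands and an already-installed Chocolatey:

--- Code: ---
# install-toolset.ps1 -- hypothetical wrapper that encapsulates the manual steps
# The toolset to install, in order (example names only)
$packages = @('git', '7zip', 'notepadplusplus')

foreach ($pkg in $packages) {
    Write-Host "Installing $pkg ..."
    choco install $pkg -y
    # choco signals failure via its exit code, so stop rather than plough on
    if ($LASTEXITCODE -ne 0) {
        Write-Host "Install of $pkg failed (exit code $LASTEXITCODE); stopping."
        exit $LASTEXITCODE
    }
}
Write-Host "All packages installed."
--- End code ---

A GUI wizard would presumably just be a front-end that builds the $packages list and reports progress/failures more kindly.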
The OP for this thread requests opinions/thoughts about Chocolatey and using it as a portable tool. Portability might actually make what already seems to be a powerful and complex toolset even more complex to use, and thus more difficult to automate (e.g., a decision table with too many potential branches and unknown exits to easily cater for them all). Furthermore, even if you did manage to automate it, would the potential impermanence of some of the toolset components frustrate the objective of the wizard GUI?

What I mean by that is, looking at it from a theoretical perspective, if:

* (a) the AS-IS process steps to achieve a given outcome are undocumented or poorly documented and liable to be changed at short notice in an uncontrolled fashion, then they are Ad hoc (CMM Level 1) - aka Chaotic. It would be a waste of time trying to automate that as the risk would be that by the time you had automated and tested the automation, the process steps could have been changed without your knowledge, pulling the rug out from underneath you, as it were. So that would not be recommended as a cost-effective action.


* (b) the AS-IS process steps to achieve a given outcome are undocumented or poorly documented at best, but used repeatably and thus more reliable, though still changed in an uncontrolled fashion, then they are Repeatable (CMM Level 2), and though it might seem worthwhile to try to automate that, it still carries the same risk as in (a). So that would not be recommended as a cost-effective action.


* (c) the AS-IS process steps to achieve a given outcome are defined and documented and used repeatably and only changed occasionally in a relatively controlled fashion, then they are Defined (CMM Level 3), and reliable to the extent that it would probably be worthwhile putting the effort into trying to automate the process. So that would be recommended as a cost-effective action.
Things get even better at CMM Levels above that, but - and I could be wrong, of course - I get the impression from the video that the CMM Level in this case was likely to be 1 or 2, but not 3, for some/most of the toolset components. In which case, from a risk-avoidance perspective, you take the lowest CMM Level of any part of the AS-IS process as your LCD (Lowest Common Denominator) and overall CMM Level (it's the weakest link). That could be termed "Not yet ready for Prime Time", or something.

Therefore, overall, I'd not be too optimistic about the process being something that could be fully automated with (say) a GUI wizard on the front. However, if one had the resources, an experimental approach might still be interesting. Try an exploratory "suck-it-and-see" - i.e., build a prototype of the automation Wizard - and see how long it lasts before a change (or successive changes) in the toolset breaks it. The trick then would be to see if you could obtain advance warning of any impending changes, so as to have a fix in place for the Wizard in sufficiently timely fashion to avoid the Wizard failing.
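On the "advance warning" point: one low-tech approach - a sketch only, assuming nothing beyond the standard choco outdated and choco pin commands - would be a scheduled check that reports which toolset components are about to change, so the Wizard's maintainer gets notice before anything is upgraded:

--- Code: ---
# check-toolset-drift.ps1 -- hypothetical early-warning check
# Lists packages with newer versions available, without upgrading anything.
# --limit-output emits machine-readable lines: name|current|available|pinned
choco outdated --limit-output | ForEach-Object {
    $name, $current, $available, $pinned = $_ -split '\|'
    Write-Host "$name : $current -> $available (pinned: $pinned)"
}
--- End code ---

The complementary tactic would be pinning (choco pin add -n=<package>) to hold components at known-good versions until the Wizard has been re-tested against the newer ones.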

A bit of a rant:
A current example of an update process with an LCD at CMM Level 1 (Ad hoc/Chaotic) could be the process for releasing Mozilla Firefox Beta versions. I subscribe to the Beta release channel, and I have to put up with releases coming out like water sputtering out of a hosepipe with air-locks. Just about every release screws something or other up, typically breaking one or several FF add-ons/extensions, and usually for no better reason than that the probably overworked add-on developers don't have the time/resources to jump when FF says "Jump!", and so don't manage to get the add-on verified in time for the uncontrolled release schedule.
So the add-ons tab is spattered with disabled add-ons, because some wag at Mozilla has issued a bureaucratic mandate that all add-ons must be verified by Mozilla for each new FF release or the add-on will be disabled, or something.

Wherever you get CMM Level 1 or 2, you can usually identify cost inefficiencies and waste. The above Beta release process is what is often referred to by the euphemism "uncontrolled release management" in ITIL-speak, and is simply bad IT service management practice, where the use of the term "management" could be a moot point.
The amount of work it creates (a lot of which may be unnecessary/unproductive) for the add-on developers must be rather like an iceberg, and Mozilla probably isn't paying these third-party developers to dance to their tune either, so it seems to be a cynical cost-transfer or economic externalisation exercise with "all care and no responsibility" on Mozilla's part, and with the developers footing the bill.
Quite a lot of pundits seem to be saying that Mozilla might have had a "cultural collapse" and lost sight of their original objectives, and that this verification dance is likely one of the outcomes from that collapse - and they may be right, but I couldn't possibly comment.

wraith808:
I'd seen that one and discarded it also for some reason, which was the reason I didn't mention it.
-wraith808 (September 18, 2015, 09:52 PM)
--- End quote ---

He he.  When I try to investigate it, my brain cringes at the seeming amount of effort involved in assessing it :)  Source is available though (at least partly Python?).
-ewemoa (September 19, 2015, 01:42 AM)
--- End quote ---

I remember why now.  With no English documentation, I didn't feel I wanted to go in that direction.  Superficial... maybe?  But definitely a breaking point for me.

ewemoa:
With no English documentation, I didn't feel I wanted to go in that direction.  Superficial... maybe?  But definitely a breaking point for me.
-wraith808 (September 21, 2015, 12:49 PM)
--- End quote ---

Being able to learn, and then continue to learn, about a topic seems an important criterion (especially for things that keep changing, like software), so if the info is in a language one does not know, then that doesn't sound superficial to me :)

ewemoa:
I'm not sure I can digest so much at once but I'll respond to what I can ATM :)

For example, one of the implications that stood out for me was the potential usefulness of all the tools being used in a linked/sequential fashion. The demo showed it all being done manually (by typing commands into the PowerShell interface), complete with errors and then corrections, at the keyboard. The person at the keyboard probably needs to be something like (say) a Grade A system mechanic for the systems being used, with current knowledge all in his head as he types - and he did say he had spent a lot of time getting to that point - so there's a dependency right there.
-IainB (September 21, 2015, 12:39 PM)
--- End quote ---

Perhaps you're hinting that current technology has a human involved at some point -- and that's a dependency.  The actions the person decides on can (at least in retrospect) be viewed as a program that person executed.  Roughly speaking, I'd guess that accurate documentation means that other people are able to execute appropriate instructions (as well as adapt them to their needs).  So the instructions are distributed partly in humans and partly in machines -- how they're distributed differs depending on the system, I guess.

The video we watched seems to be in the territory of what I'd guess programmers and system administrators would feel capable of "decoding" -- a form of documentation.

I'd guess it's quite normal for something of this nature not to have documentation that's spelled out tidily -- but apart from docs, who knows how well the software behaves in practice (it might be fine; I just haven't tested it)!  Single programmer working for fun in spare time and all :)
