Perhaps a different way to think about the two-stage upgrade is to decouple the data definition in the database from the one in the code. <br><br>Instead of using the same record definition in both the persistence layer and the code, you pass the data through a transformation function. That way, in the database you have
<br><br>-record(item2, {name, value, ...}).<br><br>But in the code, you have <br><br>-record(item1, {name, ...}).<br><br>In between, you have<br><br>load(DB) -><br> Tuple = read_from_database_structure(DB),<br> convert_from_item2_to_item1(Tuple).
<br><br>Thus you can first upgrade your DB in a script (care is needed to deal with locking, etc.), and once that's done, you can upgrade the code, which is in general a faster operation. Minimizing dependencies is, IMO, an effective way to isolate and deal with change.
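A minimal sketch of such a transformation layer (module, function bodies, and field values below are illustrative placeholders, not part of the original suggestion):

```erlang
%% Sketch of a transformation layer that keeps the on-disk record (item2)
%% decoupled from the record the application code uses (item1).
-module(item_io).
-export([load/1]).

-record(item1, {name}).

%% Placeholder for the real persistence read; the database stores the
%% wider item2 tuple.
read_from_database_structure(_Key) ->
    {item2, "widget", 100, 90, "4000"}.

%% The application only ever sees item1, so changing the stored format
%% means changing this one function rather than all the calling code.
convert_from_item2_to_item1({item2, Name, _Value, _CostBasis, _GlClass}) ->
    #item1{name = Name}.

load(Key) ->
    convert_from_item2_to_item1(read_from_database_structure(Key)).
```

A symmetric store function would convert an item1 back to the stored shape, filling the extra fields with defaults.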
<br><br>Cheers,<br><br><div><span class="gmail_quote">On 10/30/07, <b class="gmail_sendername">David Mercer</b> <<a href="mailto:dmercer@gmail.com">dmercer@gmail.com</a>> wrote:</span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
My understanding of how a two-stage upgrade would work follows.<br><br>Prior to the upgrade, we have the following record:<br><br>> -record(item1,<br>> { name<br>> }).<br><br>In Stage 1 of the upgrade, support for the following record structure is
<br>added, but the data is not itself updated:<br><br>> -record(item2,<br>> { name<br>> , value<br>> , cost_basis<br>> , gl_class<br>> }).<br><br>Our Stage 1 upgrade code must be written to support both item1's and
<br>item2's, but it does not upgrade the data structures yet. Only once all<br>3,000 nodes of our system have been upgraded to Stage 1 can we initiate<br>Stage 2: upgrade all our 'item1' data structures to 'item2'. Optionally, we
<br>can also purge all references to the 'item1' structure.<br><br>This approach seems problematic for the following reasons: (1) the time and<br>administrative overhead required to release a new version is doubled; (2)
<br>you may run into the situation in which the stages take so long to complete<br>that we have multiple upgrades happening across the system at once; (3) code<br>has to be rewritten every release if it handles items (to accept both
<br>'item1' and 'item2' structures), even if it is not directly affected by the<br>change.<br><br>For example, take the following scenario:<br><br>A new release is ordered which requires a change to the item record. All
<br>the code that deals with items (which is almost all of it) is duplicated and<br>changed to allow it to work with both the old item structure and the new.<br>When this is completed, a Stage 1 upgrade is ordered worldwide. Frankfurt
<br>and Singapore complete their Stage 1 upgrade quickly, but unfortunately our<br>Los Angeles operation is caught up in a legal requirement to notify a<br>particular client two weeks in advance of any system update.
<br>Meanwhile, our data center in Maputo, Mozambique has been having unspecified<br>"problems" upgrading, which is putting the whole worldwide release of these<br>new features on hold. Even after the two-week North American hold is
<br>lifted, Maputo still has not upgraded to Stage 1.<br><br>Meanwhile, while waiting to hear back from Maputo (which continues to<br>demur), Software has released a new version containing yet another revision<br>of the item record. Now Frankfurt and Singapore have code running that works
<br>with all three formats, Los Angeles has two, and Maputo is still on its<br>first, and *still* no new functionality has been released.<br><br>Is this the best we can do?<br><br>Cheers,<br><br>David<br><br>-----Original Message-----
<br>From: Sean Hinde [mailto:<a href="mailto:sean.hinde@gmail.com">sean.hinde@gmail.com</a>]<br>Sent: Monday, October 29, 2007 14:25<br>To: <a href="mailto:dmercer@alum.mit.edu">dmercer@alum.mit.edu</a><br>Cc: <a href="mailto:erlang-questions@erlang.org">
erlang-questions@erlang.org</a><br>Subject: Re: [erlang-questions] beginner: Updating Data Structures<br><br>Hi,<br><br>Perhaps you could consider a two stage upgrade. First upgrade all<br>software to a version that understands the new record as well as the
<br>old (dynamically dispatching/converting on record size). Then once<br>that is done invoke the command to tell nodes to start using the new<br>record (perhaps also doing a few table transforms along the way).<br><br>It can sometimes help to use the fact that records are also tagged
<br>tuples. Kind of ugly in the code, but could be isolated to a small<br>number of places.<br><br>Another option we have used with some success is to have a single extension<br>field in the record that holds a tagged tuple list. It is
<br>extraordinary how much such a structure can be abused ;-)<br><br>Sean<br><br>On 29 Oct 2007, at 17:26, David Mercer wrote:<br><br>> While an Erlang system has the ability to update its program on the<br>> fly, updating data structures on the fly seems a bit more
<br>> difficult. Unless you can upgrade all nodes simultaneously, some<br>> nodes will be expecting the old data structure while others the<br>> new. My question, therefore, is how to structure my data? Is there
<br>> an approach that I am missing that is both upgrade-friendly and ETS/<br>> Mnesia-compatible? Please see the following paragraphs for my<br>> analysis so far.<br>><br>> Suppose we are writing an inventory control application. We decide
<br>> to create a record to contain our information about items in our<br>> inventory. Not much to say about items, really, so we're just going<br>> to hold the item's name in a record. If something else ever needs
<br>> to be tracked regarding these items, we can always upgrade our data,<br>> right?<br>><br>> -record(item,<br>> { name<br>> }).<br>><br>> So we roll out our new inventory system to 3,000 nodes in our 25
<br>> warehouses in 6 different countries, and everything works<br>> swimmingly. For a while.<br>><br>> However, some time later, our accounting department decides we need<br>> a way to value our inventory, and each item should have a value
<br>> associated with it. That way, we can calculate inventory value<br>> simply by multiplying value by quantity at each location.<br>> Unfortunately, we cannot now use our record structure. What to do?<br>>
<br>> Well, naïvely, we decide to just modify our item record.<br>><br>> -record(item,<br>> { name<br>> , value<br>> }).<br>><br>> This new record structure is incompatible with the old item record
<br>> structure, so we will also write some code that upgrades our items<br>> in the system to the new structure when we upgrade the system.<br>> Unfortunately, unless our entire worldwide operation is upgraded all
<br>> at once, any process using the old structure will crash when it<br>> encounters a new-style item, and vice versa. Simultaneously upgrading<br>> all 3,000 nodes is impractical, so we'll have to rethink our<br>
> original decision.<br>><br>> We could have created the original record structure with expansion<br>> slots available for future use.<br>><br>> -record(item,<br>> { name<br>> , 'X1'
<br>> , 'X2'<br>> , 'X3'<br>> }).<br>><br>> Now when Accounting wants us to add the value of the item to the<br>> item record, we simply redefine one of the expansion slots.
<br>><br>> -record(item,<br>> { name<br>> , value<br>> , 'X2'<br>> , 'X3'<br>> }).<br>><br>> This will not crash any process, since the size of the resulting tuple
<br>> is still the same. Unfortunately, we might run out of expansion<br>> slots if we don't allocate enough of them. The example runs out of<br>> slots once Accounting also gets their cost-basis and GL-class<br>
> elements added, leaving us in the same boat as before. We simply<br>> delayed the inevitable. We might get bright and allocate the new<br>> slots hierarchically by department, for instance, so Accounting gets
<br>> only one slot for all of its information, and we define a new record<br>> for the information in that slot.<br>><br>> -record(item,<br>> { name<br>> , acctg<br>> , 'X2'
<br>> , 'X3'<br>> }).<br>> -record(acctg_item,<br>> { value<br>> , cost_basis<br>> , gl_class<br>> , 'X1'<br>> , 'X2'<br>> , 'X3'
<br>> }).<br>><br>> However, this approach once again only delays the inevitable. When<br>> Inventory Control and Manufacturing take up the other two expansion<br>> slots, there is no room for Engineering's data. Plus, we have
<br>> multiplied this problem, since it occurs for each of our subrecords,<br>> which can also run out of expansion slots.<br>><br>> Another alternative might be to have only one expansion slot, which<br>> is filled in by the next version of the item record.
<br>><br>> -record(item,<br>> { name<br>> , v2<br>> }).<br>> -record(item_v2,<br>> { value<br>> , cost_basis<br>> , gl_class<br>> , v3<br>> }).
<br>><br>> Now when we have more elements to add, we create an item_v3 record<br>> (with a v4 element to accommodate future expansion), and so on. The<br>> problems with this, however, are that programmers need to know which
<br>> version of the record a certain data element is in, and that by the<br>> time we go through a few score enhancements and we're up to version<br>> 68, it becomes quite cumbersome, and is little better than had we
<br>> used a linked list.<br>><br>> In fact, a linked list may well be better. Instead of writing<br>> functions with the record syntax, we can use lists.<br>><br>> item_value([item, _Name, Value | _])<br>
> -><br>> Value<br>> .<br>><br>> To retrieve the value, we only need to know its position in the<br>> list. This approach suffers from a couple of problems: (1) You need
<br>> to know the position of each element in the list; (2) This list will<br>> be repeated quite frequently, so when you have 300 attributes your<br>> code will be brittle, repetitive, and difficult to maintain.
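[To make problem (2) above concrete, a short sketch, with field positions assumed from the running example: every accessor must repeat the positional layout, so inserting a field before 'value' silently breaks all of them.]

```erlang
%% Positional list access: each accessor hard-codes the element's position
%% in the list, which is exactly what makes this approach brittle.
-module(item_list).
-export([value/1, cost_basis/1]).

value([item, _Name, Value | _]) ->
    Value.

%% Adding a new field before 'value' would require updating both of these
%% patterns (and every other accessor) by hand.
cost_basis([item, _Name, _Value, CostBasis | _]) ->
    CostBasis.
```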
<br>><br>> Perhaps an alternative approach is to define each record version<br>> independently, instead of additively as we tried earlier.<br>><br>> -record(item1,<br>> { name<br>> }).<br>
> -record(item2,<br>> { name<br>> , value<br>> , cost_basis<br>> , gl_class<br>> }).<br>><br>> Now in our code, we have versions of each function matching on the<br>
> record structure, and a function that handles the no-match case (in<br>> case you're running v2 code when you receive a v3 record). Once<br>> again, however, we run into a couple of obstacles: (1) We must<br>> implement a different version of each function for each version of
<br>> the record (this will get tiresome around version 68); (2) new<br>> versions are not backward compatible: a node running a previous<br>> version of the code will not recognize future-versioned data<br>> structures, even though the only fields it needs are those from its
<br>> own version.<br>><br>> Let's borrow a page from object-oriented design principles. Why not<br>> let the item provide its own methods for data access through<br>> functions contained on the structure? We define a record "class"
<br>> which has two slots: one for the methods, and one for the data. By<br>> doing this, items carry around their own methods and so it doesn't<br>> really matter what version of an item something is, so long as the
<br>> item knows how to use its own data. First we define some<br>> infrastructure.<br>><br>> -module(class).<br>> -export([invoke/3]).<br>> -record(class,<br>> { methods<br>> , data
<br>> }).<br>> invoke(Method_ID, Object = #class{methods = Methods}, Args)<br>> -><br>> Method = Methods(Method_ID),<br>> Method(Object, Args)<br>> .
<br>><br>> To call a method on an object, syntax is simply "invoke(Method_ID,<br>> Object, Args)", such as<br>><br>> X = item:new ("X"), % Create a new item "X"<br>> X_Name = class:invoke(get_name, X, []), % Returns "X"
<br>> Y = class:invoke(set_name, X, ["Y"]). % Changes item name<br>><br>> This is great for encapsulation! The implementation is<br>> straightforward.<br>><br>> -module(item).<br>> -export([new/1]).
<br>> -include("class.hrl").<br>> -record(item,<br>> { name<br>> }).<br>><br>> new(Name)<br>> -><br>> #class{ methods = fun(get_name) -> fun get_name/2
<br>> ; (set_name) -> fun set_name/2<br>> end<br>> , data = #item{ name = Name }<br>> }<br>
> .<br>><br>> get_name(#class{data = #item{name = Name}}, _)<br>> -><br>> Name<br>> .<br>><br>> set_name(Object = #class{data = Item}, [Name])<br>
> -><br>> Object#class{data = Item#item{name = Name}}<br>> .<br>><br>> Alas, there is a fly in this ointment, too. While it would appear<br>> that the method functions are being carried around along with the
<br>> data (in fact, the item tuple is "{class,#Fun<item.0.96410792>,<br>> {item,"X"}}"), those functions are really not carried around from<br>> node to node. Instead, Erlang only carries around references to the
<br>> functions. This means if this item shows up on a node where the<br>> function does not exist, an error will occur when a method is invoked.<br>><br>> The fact that you cannot safely sling functions around with your
<br>> data from node to node indicates that perhaps we need a very simple<br>> interface with functions that will never change. Maybe instead of<br>> using records at all, we can use basic OTP library functions to
<br>> associate item properties with their values. Sounds kind of like<br>> what proplists were designed for.<br>><br>> X = [{name, "X"}], % Create a new item "X"<br>> X_Name = proplists:get_value(name, X), % Returns "X"
<br>> Y = [{name, "Y"} | proplists:delete(name, X)]. % Changes item name<br>><br>> A similar effect can be had with dicts, with the decision probably<br>> to be made based on performance. (Not only that, but the decision
<br>> can be made dynamically at run-time, since there are functions for<br>> converting between the two.)<br>><br>> X = dict:from_list([{name, "X"}]), % Create a new item "X"<br>> X_Name = dict:fetch(name, X), % Returns "X"
<br>> Y = dict:store(name, "Y", X). % Changes item name<br>><br>> This approach has the advantage of being completely backward-<br>> compatible with respect to my code-base. Should a later version of
<br>> our inventory application add a property, it will not change the<br>> operation of any previous version. Once again, however, there are<br>> problems with this approach: (1) property values cannot be used for
<br>> matching in function definitions; (2) these structures are not<br>> easily indexed: ETS and Mnesia require record data types. While<br>> Disadvantage 1 might be easily managed by performing lookups and<br>
> conditionals within the function, Disadvantage 2 is probably<br>> intractable.<br>><br>> To repeat my question, gentle readers, how ought I structure my<br>> data? Is there an approach that I am missing that is both upgrade-
<br>> friendly and ETS/Mnesia-compatible?<br>><br>> Thank-you.<br>><br>> Cheers,<br>><br>> David<br>><br>><br>> _______________________________________________<br>> erlang-questions mailing list
<br>> <a href="mailto:erlang-questions@erlang.org">erlang-questions@erlang.org</a><br>> <a href="http://www.erlang.org/mailman/listinfo/erlang-questions">http://www.erlang.org/mailman/listinfo/erlang-questions</a><br>
</blockquote></div><br>