TOPIC:
Draft texts for TTCN-3 edition 2 19 Sep 2001 14:47 #5936
Hi folks,
Since there have been no answers to this mail, I am sending it again, as it raises some issues which are very important in my view.

Forwarded message:

Here are some comments from us at TU-Berlin concerning the draft of edition 2 of the standard.

First, some errata:

First of all, the 'changed keyword list' should itself be changed :-)

  Delete: altcontroltype, toalt, afteralt, nextdefault
  Add:    repeat

In the Teststeps chapter, in ZZ.1, in the second example, the 'alt.' in 'alt.AnotherTestStep()' should be removed.

Now to some more serious comments.

In the BNF:

I would strongly suggest that the length restriction for 'record of' and 'set of' types have the same syntax as that for string subtypes (where the length restriction is placed BEHIND the defined type identifier instead of BEFORE it). All these types are some sort of 'list' type (which is why they share the length restriction) and should therefore be used similarly, so as not to confuse the user of the language.

In Section 6.7:

a) Why are set types and set of types only compatible with other set types?

Personally, I would propose that at least 'record of' and 'set of' types be compatible (if the length restrictions match) and that type compatibility be transitive, i.e. if a 'record' type is compatible with a 'record of' type, and that is in turn compatible with a 'set of' type, then the first should also be compatible with the last.

For example, the type

  type record T1 { integer f1, ..., integer fn }

is compatible with

  type record of integer T2 length(n)

This, in my view, should in turn be compatible with

  type set of integer T3 length(n)

which again, of course, is compatible with

  type set T4 { integer g1, ..., integer gn }

This should not be too surprising for the user, since all of these types can take the same denotation, i.e. { V1, ..., Vn }.

b) What does it mean that the compatibility rules for set types are 'identical to records except the fields in the structure can be in any order', when the 'identical order' of the types in record types is (in my view) the only criterion for record type compatibility?

This would have the following implications (in my understanding):

  type set A { integer a, boolean b }

and

  type set B { integer b, boolean a }

are compatible (identical). This does not seem to pose a problem at first glance, BUT:

  type set C { integer a, integer b }

and

  type set D { integer b, integer a }

are, of course, compatible as well. So if I write

  var C v1;
  var D v2 := { 1, 2 };
  v1 := v2;

and the order is irrelevant, then it is not clear which field is assigned to which, i.e. either

  v1.a := v2.b and v1.b := v2.a

or

  v1.a := v2.a and v1.b := v2.b

The latter would be consistent with the example given for set type compatibility, which seems to imply that set types are compatible if the NAMES and their respective types are compatible (which would make a whole lot of sense). However, if the compatibility rules are the same as for record types, then the former assignment would be more consistent (although this would be inconsistent with the example given in the draft).

In the latter case (assuming the example in the draft should not be inconsistent and that the names, as in record types, are irrelevant, since nothing else is stated), type compatibility could statically only be decided with an (in the worst case) exponential algorithm, i.e. all permutations of fields must be matched for compatibility. Assume that each Ti is only compatible with Ti; then

  type set F { T1 f1, ..., Tn fn }

would be compatible with

  type set G { Tn gn, ..., T1 g1 }

but you would need factorial(n) (i.e. n * (n-1) * ... * 2) compatibility checks to find out whether they are incompatible. This is especially undesirable if I have large set types which actually are similar, but still incompatible. (If it is possible to find an efficient algorithm for the equivalence classes of types, then a more efficient algorithm for type compatibility can be found as well.)

Also, if the order is irrelevant, then the compiler has to decide arbitrarily which compatible field to assign to which (or this would have to be stated explicitly in the standard).

From a language designer's and compiler constructor's point of view, I would suggest that the 'matching order' criterion for record types be replaced by a 'matching name' criterion for set types: for set types to be compatible, they must have the same number of fields, the same field names in any order, and for each field name a compatible type for that field. Then the compatibility of two set types can be decided in linear time.

Also, if the list denotation (i.e. { V1, ..., Vn }) is used for the value of a 'set' type, the 'order' criterion should be used for the assignment of the set fields (i.e. Vi must be of the type of the i-th field of the set type), because otherwise you have the same complexity/arbitrariness problem in deciding which value should be assigned to which field as in the type compatibility check (which, in essence, would have to be invoked for this decision).

c) On another note, the operator precedence table (in the main standard) does not seem to make a whole lot of sense to us:

- Why do bitwise operations on strings have a lower priority than relational operations? This would mean that if I want to compare two results of a bitwise string operation, I always have to put parentheses around them, although nothing else makes sense:

    s1 or4b s2 != s3 and4b s4

  only makes sense as

    (s1 or4b s2) != (s3 and4b s4)

  while it means

    s1 or4b ((s2 != s3) and4b s4)

  in the standard. I think the bitwise operations should have a higher priority than the shift/rotate operations (with which they do not clash, because those take a string and an integer operand). Then it would be feasible to write something like

    s1 or4b s2 >> 3 == s3 and4b s4 <@ 5

  which would mean

    (s1 or4b s2) >> 3 == (s3 and4b s4) <@ 5

  instead of

    s1 or4b ((s2 >> 3) == s3) and4b (s4 <@ 5)

- Why does the & (concatenation) operation have a lower priority than even the logical operations?! This would imply that it is desirable to concatenate boolean values (which is not possible)! The concatenation operation should be put at the same level as binary + and -, so that bitwise operations, shift operations, and comparisons become possible on concatenated strings without having to put parentheses around them.

- The 'not' operator should have lower priority than the comparison operators, because otherwise you have to put parentheses around all negated conditions except plain identifiers, although it should be quite clear what 'not a > b' means (at the moment it means '(not a) > b', which NEVER makes sense).

- Likewise, the 'not4b' operator should have lower priority than the & (concatenation) operator, so that it is possible to invert concatenated strings (here it actually IS a design decision, but the 4b operators should all be grouped together in my opinion, so as not to confuse the user).

So, here is our proposal for the revised operator precedence table:

  priority | operator type (if not binary) | operator
  ---------+-------------------------------+------------------
  highest  |                               | ( ... )
           | unary                         | +, -
           |                               | *, /, mod
           |                               | +, -, &
           | unary                         | not4b
           |                               | and4b
           |                               | xor4b
           |                               | or4b
           |                               | <<, >>, <@, @>
           |                               | <, >, <=, >=
           |                               | ==, !=
           | unary                         | not
           |                               | and
           |                               | xor
  lowest   |                               | or

Greetings,

Jacob Wieland, TU-Berlin (UEBB - Compiler Construction)
Draft texts for TTCN-3 edition 2 19 Sep 2001 16:16 #5937
On Wed, 19 Sep 2001, Jacob 'Ugh' Wieland wrote:
> In the BNF:
>
> I would strongly suggest that the length restriction for
> 'record of' and 'set of' types should have the same syntax
> as that for string subtypes (where the length restriction
> is placed BEHIND the defined type identifier instead of BEFORE it).

I would like to suggest changing the syntax in a way that allows nested data types to be defined in a _single_ type definition. At the moment, you need two type definitions if you want to place a record inside a record. If I remember correctly, it is at the moment impossible to extend the current syntax in this direction without causing ambiguities. I suggest keeping this in mind when unifying/changing the syntax for length restrictions.

> c) On another note, the operator precedence table (in the normal
>    standard) does not seem to make a whole lot of sense to us:
>
> - why does the & (concat) operation have a lower priority than
>   even the logical operations?!
>   This would imply that it is desirable to concatenate boolean
>   values (which is not possible)!

I think the EBNF must be fixed as well.

Michael

--
Michael Schmitt, Institute for Telematics, Medical University of Luebeck
Ratzeburger Allee 160, D-23538 Luebeck, Germany
phone: +49 451 500 3725, fax: +49 451 500 3722, WWW: www.itm.mu-luebeck.de
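A hedged illustration of the kind of nesting Michael is asking for (the type and field names here are invented for the sketch): with the current draft syntax, the inner record needs its own named definition, while the desired form would allow it to be written in place.

  // Current draft: two separate definitions to put a record inside a record
  type record Inner {
    integer a,
    boolean b
  }
  type record Outer {
    Inner   nested,
    integer c
  }

  // The kind of single, in-place definition being suggested (not legal in the draft):
  // type record Outer {
  //   record { integer a, boolean b } nested,
  //   integer c
  // }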
Draft texts for TTCN-3 edition 2 19 Sep 2001 16:21 #5938
Hi Jacob,
I have some problems with your a). Let's take an example:

  type record of integer R;
  type set of integer S;

  var R R1, R2;
  var S S1 := { 1, 2, 3 };

  R1 := S1;
  R2 := S1;

How can you make sure that R1 == R2 is always true?

Regards,
Csaba.
Draft texts for TTCN-3 edition 2 19 Sep 2001 16:54 #5939
On Wed, 19 Sep 2001, Csaba Koppany wrote:
> I have some problems with your a). Let's take an example:
>
>   type record of integer R;
>   type set of integer S;
>
>   var R R1, R2;
>   var S S1 := { 1, 2, 3 };
>
>   R1 := S1;
>   R2 := S1;
>
> How can you make sure that R1 == R2 is always true?

I can't, and I don't have to; that is not implied by the above definitions. Since the elements of a set are unordered, assigning them to a value of record type puts them into the record in an arbitrary order (although most implementations will not change the order stored in the set, so the records actually WILL be the same; that is an unspecified bonus which only hackers may rely on).

Type compatibility only says that one thing can be used as another thing. Now, why should I not be able to use an 'unordered' list of integers as an 'ordered' list of integers (and vice versa)?

Jacob
Draft texts for TTCN-3 edition 2 20 Sep 2001 06:00 #5940
Hi Jacob,
If you don't want to, that is another thing. But then I'm just wondering what it can be used for, and what the benefit is. I'm not sure that it won't cause more problems than it helps. That's all.

Regards,
Csaba.
Draft texts for TTCN-3 edition 2 20 Sep 2001 07:57 #5941
Hi all,
Regarding item a): no, of course not. 'Record of' and 'set of' do not have the same semantics, as the order in a 'record of' is significant. In your example, if a value of T2 is awaited at receipt with values { 1, 2, 3 } and { 2, 1, 3 } is received, it shall not match! Whereas if a value of T3 is awaited, the integers 1, 2 and 3 can arrive in any order and it shall match.

Also, following your proposal, with variables T1..T4 of the corresponding types:

  T3 := { 1, 2, 3 };
  T4 := { 1, 3, 2 };  // T4 is equal to T3, as order is not significant for set and set of
  T1 := T3;           // according to your proposal this should be valid
  T2 := T4;           // according to your proposal this should also be valid

At this point T3 == T4 evaluates to true, so according to your proposal T1 == T2 should also evaluate to true. But it will not, because T1 and T2 are different, and T1 == T2 evaluates to false!

Best Regards, György

dr. György RÉTHY
Ericsson Communications Systems Hungary Ltd., Conformance Center
tel.: +36 1 437-7006; fax: +36 1 437-7767; mobile: +36 30 297-7862
web: www.r.eth.ericsson.se/~ethgry
Draft texts for TTCN-3 edition 2 20 Sep 2001 08:20 #5942
On Thu, 20 Sep 2001, Csaba Koppany wrote:
> if you don't want to, it is another thing.

I didn't talk about 'wanting to'.

> But then, I'm just wondering what can it be used for,
> and what is the benefit. I'm not sure, that it won't
> cause more problem than it helps. That's all.

Type compatibility is a tool for generalization and as such reduces work, time and errors. It provides the possibility to 're-use' types that have been defined for one purpose for another purpose.

For example, if you have an algorithm which works the same way for all entities which are basically a 'list of T' (as is true for both 'record of T' and 'set of T'), such as combining the elements or performing an operation on each element, it should not have to be programmed twice if the structure or the order in which the elements are placed is irrelevant.

Otherwise, if I want to use an algorithm that works on 'record of T' and I have a 'set of T' on whose elements the algorithm would work just as well, I first have to copy the one into the other by hand in order to use the algorithm on the set. This can be avoided by proper type compatibility.

Jacob
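A minimal sketch of the copy-by-hand step Jacob describes, assuming the draft's rules under which 'record of' and 'set of' are not compatible (the type names and the Process function are invented for illustration, and sizeof is assumed here to return the number of elements of a list value):

  type record of integer IntList;
  type set of integer    IntBag;

  // Some existing algorithm written against the ordered list type:
  // function Process(in IntList l) return integer { ... }

  // Under the draft rules the set must first be copied element by element:
  var IntBag  s := { 3, 1, 2 };
  var IntList l;
  for (var integer i := 0; i < sizeof(s); i := i + 1) {
    l[i] := s[i];
  }
  // Process(l);

  // Under the proposed compatibility the copy would disappear:
  // var IntList l2 := s;
  // Process(l2);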
Draft texts for TTCN-3 edition 2 20 Sep 2001 09:12 #5943
On Thu, 20 Sep 2001, Jacob 'Ugh' Wieland wrote:
> Otherwise, if I want to use an algorithm that works
> on 'record of T' and I have a 'set of T' on whose
> elements the algorithm would work just as fine,
> I first would have to copy the one into the other
> by hand to use the algorithm on the set.
> This can be avoided by a proper type compatibility.

IMHO, a "record of" carries more information than a "set of". Thus such an algorithm should work on 'set of T'. I think type compatibility does not necessarily mean that some A can be converted into B _and_ vice versa.

Michael
Draft texts for TTCN-3 edition 2 20 Sep 2001 09:15 #5944
The last thing Jacob said is not quite right, in that allowing wider type compatibility is not the only way to solve the problem of writing generic code. For example, Ada has very strict type rules but still allows generic algorithms to be written via the "generic", "private" and "limited" facilities in the definitions of packages and procedures. Basically it is a method that works and IMHO :-) works quite well.

However, we are a long way into the definition process, so much as I prefer the Ada way, it is probably now best for us to use less restrictive type rules to solve generic programming problems.

Still, each case should be judged on its merits. Widening type compatibility to allow a few generic functions to be written, at the expense of compromising the safety of everything else the language is used for, is a bad idea (so if just one or two programmers have to do a bit of extra work, tough). Doing it for cases where lots of generic code may be expected to be written is, however, probably worth the risk, as lots of programmers doing extra work is definitely a bad idea.

Regards

Derek

Derek C Lazenby, Anite, 127 Fleet Road, Fleet, Hampshire GU51 3QN
Tel: +44 1252 775200  Fax: +44 1252 775299
Draft texts for TTCN-3 edition 2 20 Sep 2001 09:16 #5946
On Thu, 20 Sep 2001, Gyorgy Rethy (ETH) wrote:
> Regarding item a). No, of course not.
> Record of and set of do not have the same semantics, as the
> order in a record of is significant.

In my understanding, type compatibility doesn't imply 'has the same meaning', but 'can also be used as'. (Otherwise, compatibility between 'record' and 'record of' types wouldn't make sense either, as they allow totally different operations and thus cannot have exactly the same semantics.)

> In your example, if a value of T2 is awaited at receipt with values { 1, 2, 3 }
> and { 2, 1, 3 } is received, it shall not match!

Who said it would? Type compatibility between T and T' means there is a well-defined coercion between both types. (For assignments of sets to records, or vice versa, this could be defined by: there is a bijection between the two structures.)

> Whereas if a value of T3 is awaited, the integers 1, 2
> and 3 can arrive in any order and it shall match.

Also true, but irrelevant to my proposal.

> Also, following your proposal, if
>
>   T3 := { 1, 2, 3 };
>   T4 := { 1, 3, 2 };  // T4 is equal to T3, as order is not significant for set and set of
>   T1 := T3;           // according to your proposal this should be valid
>   T2 := T4;           // according to your proposal this should also be valid
>
> at this point T3 == T4 evaluates to true, so
> according to your proposal T1 == T2 should also evaluate to true.

No, you misunderstood me there. The assignment of T3 and T4 to T1 and T2 respectively adds the 'order' criterion to the values of T3 and T4. When values of record type are compared, the order _IS_ relevant, so although T3 == T4, T1 is not necessarily equal to T2.

> But it will not, because T1 and T2 are different, and
> T1 == T2 evaluates to false!

That was not implied by my proposal. T1 could be equal to T2, but it does not have to be (in my opinion). If it is, that is pure accident (the ordering of T3 and T4 happened to put their elements in the same order).

Most interesting is probably the equality between a value of set type and one of record type, e.g. is T1 == T4? From the point of view of T4 this would always be true (by interpreting T1 as a set), but from the point of view of T1 it could be false (because it would interpret T4 as a record). Thus I would say they are not equal, as equality should hold from both points of view.

My basic point is: in every context, every expression has a type which is a 'view' on the expression's actual value, allowing specific operations on it.

For record types, for example, only the '.' operation is allowed, while for 'record of' types the '[index]' operation is allowed. They are STILL structurally compatible if they have the same length (and the same element types). If the 'view' on a 'record' is changed to 'record of' by an assignment, the operations that can be performed on the reference to which it has been assigned differ from those that can be performed on the original.

The same can work just as well with sets and records. When assigning a set to a record, I add the 'order' restriction to the 'view' on the set to get the 'view' on the record. Likewise, when assigning a record to a set, I forget the 'order' of the record.

It would not even be a problem if, by adding or forgetting the 'order' restriction of record types when assigning them from or to set types, the order of the elements changed, as such an assignment either 'orders' something 'unordered' arbitrarily or 'unorders' something 'ordered' arbitrarily.

I know this all sounds very confusing, but to me so does the 'restricted' type compatibility, as I see no reason for it.

Jacob

PS: The section about sets in the TTCN-3 standard should be revised, as it is not clear what an 'unordered' array is in that context. Normally, an 'ordered' array implies that the contained VALUES are ordered (i.e. for all indices i, j: if i <= j, then r[i] <= r[j]), although this is clearly not what is meant by the 'ordered' property of record types.
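A small sketch of the 'view' idea, using only the record / record of compatibility already present in the draft (the type and field names are invented for the sketch):

  type record Pair {
    integer f1,
    integer f2
  }
  type record of integer IntList;

  // inside some function:
  var Pair    p := { 1, 2 };
  var IntList l := p;      // record assigned to record of: same structure, different view
  var integer x := p.f1;   // the record view offers '.' field access
  var integer y := l[0];   // the record-of view offers '[index]' access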
Draft texts for TTCN-3 edition 2 20 Sep 2001 09:29 #5947
Hi,
here are my three pennies.

> IMHO, a "record of" carries more information than a "set of".

Correct.

> Thus such an algorithm should work on 'set of T'.

Well, I think the other way round: everything that works on a 'set of T' should also work on a 'record of'. At least it should work in a predictable way. For example, a function that operates only on the single elements of a structure should work with both. A function that relies on the relation between the elements works only with 'record of'.

Best regards,
Theo
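A sketch of the distinction Theo draws (the type and function names are invented, and sizeof is assumed here to return the number of elements): the first function only looks at individual elements, the second relies on their relative order.

  type record of integer IntSeq;

  // Element-wise: the result does not depend on the order of the elements,
  // so it makes sense for both 'record of' and 'set of' values.
  function AllPositive(in IntSeq l) return boolean {
    for (var integer i := 0; i < sizeof(l); i := i + 1) {
      if (l[i] <= 0) { return false; }
    }
    return true;
  }

  // Order-dependent: the result changes if the elements are permuted,
  // so it is only meaningful for an ordered 'record of'.
  function IsAscending(in IntSeq l) return boolean {
    for (var integer i := 1; i < sizeof(l); i := i + 1) {
      if (l[i - 1] > l[i]) { return false; }
    }
    return true;
  }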
Draft texts for TTCN-3 edition 2 20 Sep 2001 09:51 #5948
On Thu, 20 Sep 2001, Theofanis Vassiliou-Gioles wrote:
> > Thus such an algorithm should work on 'set of T'.
>
> Well, I think the other way round: everything that works on a 'set of T'
> should also work on a 'record of'. At least it should work in a predictable way.

I am sure we have the same opinion. I was thinking of the formal parameter of a function that implements such an algorithm: this parameter should be of type "set of".

Michael
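A rough sketch of Michael's reading, assuming the one-way compatibility he implies, where a 'record of' value may be used where a 'set of' is expected (the type and function names are invented, and sizeof is assumed to return the number of elements):

  type record of integer Readings;   // ordered
  type set of integer    Samples;    // unordered

  // The generic algorithm takes the weaker, unordered view as its formal parameter:
  function HasValue(in Samples s, in integer v) return boolean {
    for (var integer i := 0; i < sizeof(s); i := i + 1) {
      if (s[i] == v) { return true; }
    }
    return false;
  }

  // Since a 'record of' carries strictly more information, passing one
  // would be safe under the proposed one-way rule:
  // var Readings r := { 1, 2, 3 };
  // var boolean  b := HasValue(r, 2);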
Draft texts for TTCN-3 edition 2 20 Sep 2001 09:51 #5949
Hi Derek,
well, I totally agree with your comments. Basically I was just considering it from a "technical" point of view. With respect to the standardization process, I support the view that "nice-to-haves" shouldn't be considered, at least not for the upcoming revision. Instead we should concentrate on the essential *defects*. However, I favour the idea of collecting the valuable input produced on this list, to perhaps be considered in the longer term, but then only after thorough consideration.

With best regards,
Theo
Draft texts for TTCN-3 edition 2 20 Sep 2001 11:03 #5950
Does anybody know how to unsubscribe from the automatic mailing list?
Zelig Derchanski
Draft texts for TTCN-3 edition 2 20 Sep 2001 11:03 #5951
Send a message to the list server address with the message body:

  UNSUBSCRIBE TTCN3
Draft texts for TTCN-3 edition 2 20 Sep 2001 12:42 #5953
Hi all,
Yes, indeed, it was not clear from the first mail that the proposal is one-way only (i.e. that a record/record of could be compatible with a set/set of, but not vice versa).

But I do not agree with "ordering" something "unordered". This would mean adding extra information which in fact does not exist. An example: you have a record of integer M1 which should contain, let's say, measurement results on different channels: the first value is the result for the first channel, the second value for the second channel, etc. You cannot assign values from a set of integer M2 containing measurement results on the same channels to M1, because it would add a meaning to the single values which they do not have. You could say that the test suite writer should simply know that the elements of M1 have a semantic meaning before assigning M2, but I think this could lead to more problems than this type compatibility change would solve.

Personally, I PREFER THEO'S PROPOSAL and would put nice ideas aside for a while (which means someone should take care of a "living list"). Let's concentrate on real defects/inconsistencies in the standard in the short term and consider new proposals in the longer term, as they need more thorough consideration anyway.

Best Regards, György
Draft texts for TTCN-3 edition 2 20 Sep 2001 14:45 #5955
On Thu, 20 Sep 2001, Gyorgy Rethy (ETH) wrote:
> Yes, indeed, it was not clear from the first mail that the proposal
> is one-way only (i.e. that a record/record of could be compatible
> with a set/set of, but not vice versa).

It isn't, from my perspective: if I want to interpret a set as a record, that poses no problem for me; they are both lists of elements.

> But I do not agree with "ordering" something "unordered".
> This would mean adding extra information which in fact does not exist.

The same applies when assigning an integer to a variable with an integer range subtype, and that is allowed.

> An example: you have a record of integer M1 which should contain,
> let's say, measurement results on different channels: [...] You cannot
> assign values from a set of integer M2 containing measurement results
> on the same channels to M1, because it would add a meaning to the
> single values which they do not have.

Maybe it doesn't make sense in that special case, but I can think of any number of cases where it could make perfect sense. Also, the same argument applies to record and record of types: what if the fields in the record type have a specific meaning and the elements in the record of type don't? The standard still allows me to assign the record of to the record. Every language allows the writer to write nonsense, but it shouldn't prevent him from writing sense.

> You could say that the test suite writer should simply know that the
> elements of M1 have a semantic meaning before assigning M2, but I think
> this could lead to more problems than this type compatibility change
> would solve.

The TS writer always has to know what they are doing.

> Personally, I PREFER THEO'S PROPOSAL [...] Let's concentrate on real
> defects/inconsistencies in the standard in the short term and consider
> new proposals in the longer term.

On that note, I think the whole section about record and set types has to be revised, explaining what an 'unordered' and an 'ordered' structured type are supposed to signify, as the intuitive meanings of these terms do not seem to apply. Also, it is not clear why port and component types cannot be generic.

Jacob
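The analogy Jacob draws with range subtypes, as a short sketch (the type name is invented): assigning a plain integer to a range-restricted variable likewise 'adds' a restriction the value did not originally carry, and it is allowed.

  type integer ChannelNo (1 .. 8);   // range subtype of integer

  var integer   i := 5;
  var ChannelNo c := i;   // allowed: the unrestricted value acquires the restriction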
Draft texts for TTCN-3 edition 2 20 Sep 2001 15:50 #5956
Hi,
I fully agree that the definitions for record of and set of should be
revised to be more precise.

Best Regards, György

============================================
dr. György RÉTHY
Ericsson Communications Systems Hungary Lim.
Conformance Center
tel.: +36 1 437-7006; fax: +36 1 437-7767
mobile: +36 30 297-7862
e-mail: [address hidden]
web: www.r.eth.ericsson.se/~ethgry
============================================

> Original Message
>From: Jacob 'Ugh' Wieland [address hidden]
>Sent: Thursday, September 20, 2001 4:45 PM
>To: [address hidden]
>Subject: Re: Draft texts for TTCN-3 edition 2
Draft texts for TTCN-3 edition 2 16 Oct 2001 13:56 #5961
Regarding the following point made by Jacob 'Ugh' Wieland
[address hidden]:

> In the BNF:
>
> I would strongly suggest that the length restriction for
> 'record of' and 'set of' types should have the same syntax
> as that for string subtypes (where the length restriction
> is placed BEHIND the defined type identifier instead of BEFORE it).
>
> All these types are some sort of 'list' type (which is why they
> share the length restriction) and thus should be used similarly
> so as not to confuse the user of the language.

Please note the proposed solution is not possible, for exactly the reason
that a 'record of' can be a 'record of' a string type and would therefore
be a list of lists and require two possible length constraints.

E.g. in the following type there are two lengths I need to constrain: the
length (number of elements) of the 'record of' and the length of the octet
strings:

type record of octetstring MyExampleType

If I wish to constrain the octet string then I write:

type record of octetstring MyExampleType length(7)

which is exactly the same as the subtype syntax. Now if I wish to
constrain the record of, I write:

type record length(9) of octetstring MyExampleType

Note 1: this is not some arbitrary syntax choice but directly derived from
ASN.1.

Note 2: You may notice that the current BNF is wrong anyway because it
puts the length constraint after the 'of'.

BR Colin.
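One way to pin down both constraints unambiguously, also hinted at later in this thread, is to name the constrained element type separately and then restrict the 'record of' itself (a sketch only; Oct7 is an invented name, and the 'record length(...) of' form is the one described above):

    type octetstring Oct7 length(7)               // each element is a 7-octet string
    type record length(9) of Oct7 MyExampleType   // the list itself has length 9

This sidesteps the question of which length a trailing 'length(...)' after the type identifier would refer to.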
Draft texts for TTCN-3 edition 2 18 Oct 2001 11:29 #5989
On Tue, 16 Oct 2001, Colin Willcock wrote:
> Regarding the following point made by Jacob 'Ugh' Wieland
> [address hidden]:
>
> > In the BNF:
> >
> > I would strongly suggest that the length restriction for
> > 'record of' and 'set of' types should have the same syntax
> > as that for string subtypes (where the length restriction
> > is placed BEHIND the defined type identifier instead of BEFORE it).
> >
> > All these types are some sort of 'list' type (which is why they
> > share the length restriction) and thus should be used similarly
> > so as not to confuse the user of the language.
>
> Please note the proposed solution is not possible for exactly the
> reason that a 'record of' can be a 'record of' string type and would
> therefore be a list of lists and require two possible length constraints.

Now you're confusing me totally. As I had only read the proposed sections
of the review (and the proposals on this list) and not every single rule
in the updated BNF, I was not aware of the change that a SubTypeSpec could
be at the end of a StructOfDefBody.

But, if that is the case, then, knowing the rest of TTCN-3 (especially the
SubtypeDef), I would assume that the restriction applies to the type to be
declared and not to the element type it is derived from.

Some examples of what the restrictions can be used for (in my
up-till-now understanding):

type charstring Type_1 length(10)
type charstring Type_2 ("abc", "def")
type record of integer Type_3 length(10)
type record of integer Type_4 ({1,2,3}, {4,5,6})
type record of charstring Type_5 length(10)
type record of charstring Type_6 ({"abc","def"}, {"ghi","jkl"})

If you wanted to restrict the _element_ type additionally, I would assume
it far more logical to append that restriction to that type in the
declaration:

type record of charstring length(10) Type_7 length(10)

which would mean the same as:

type record of Type_1 Type_7 length(10)

i.e. a type of records of length 10 containing charstrings of length 10.

If what you are saying is indeed the case, then this makes the language
design for record of types and string types very non-orthogonal and thus
probably not very understandable, especially if there are not sufficient
examples to explain the different statements.

Up till now, I thought the general approach to type definition was:

type <type> <identifier> [<parameters>] [<body>|<restriction>]

where <type> could be union, record, set, record of <type>, set of
<type>, <stringtype>, component, port, enumerated, <type_identifier> or
<type_instance>.

In the case of record of and set of types, this could be modified to
'record of <type> [<restriction>]' and 'set of <type> [<restriction>]',
but this is not necessary, as a type with that restriction could have been
declared by itself and used in the record of declaration.

To summarize my proposal:

Either: leave out the (in my view unnecessary) possible element-type
restrictions in record of and set of type declarations,

Or: append the element-type restrictions in record of and set of type
declarations to the element type and not to the type declaration (i.e. the
declared record of or set of type).

In any case: let the subtype restriction appended to record of and set of
type declarations mean the same as for string-type declarations: a
restriction on the declared type, not on its contents!
In terms of the BNF:

RecordOfDef ::= RecordKeyword OfKeyword StructOfDefBody
SetOfDef ::= SetKeyword OfKeyword StructOfDefBody
StructOfDefBody ::= Type [SubTypeSpec] (StructTypeIdentifier | AddressKeyword) [SubTypeSpec]
/* Static semantics: The first optional SubTypeSpec restricts the values
   of the content elements of the struct-of type to be declared. The
   second optional SubTypeSpec restricts the length or values of the
   struct-of type to be declared. */

Greetings, Jacob Wieland, TU-Berlin

PS:
> E.g. in the following type there are two lengths I need to constrain:
> the length (number of elements) of the 'record of' and the length of
> the octet strings:
>
> type record of octetstring MyExampleType
>
> If I wish to constrain the octet string then I write:
>
> type record of octetstring MyExampleType length(7)
>
> which is exactly the same as the subtype syntax.

It may be the same syntax, but it is used with different semantics, which
is what makes it so confusing to me.
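For illustration only, a declaration under this proposed StructOfDefBody might look as follows (a hypothetical example of the proposal, not valid under the current draft's BNF; the type name is invented):

    type record of integer (0..255) MyByteList length(10)
    // (0..255)   - first SubTypeSpec:  restricts each element value
    // length(10) - second SubTypeSpec: restricts the declared record of type itself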