Sizing methods
Measuring applications is becoming more and more common as people understand the role of such metrics. Knowing each piece of software's functional size, we can derive a lot of useful information, such as the time needed to introduce modifications or to develop a new version of the program. That is why people pay growing attention to the numerous functional measurement methods. They vary in accuracy, but each of them leads to the same point: a number of function points that describes the application's size. To that end, different sizing methods have been worked out.
Establishing the boundaries
Regardless of which method we choose, there are a few things to think about before doing anything else.
The first, and at the same time the most important, seems obvious, yet it is crucial for further calculations, so disregarding it would be careless.
We need to pay close attention to the application boundary. Today's IT systems are ever bigger and more complex, and the web of connections with other software keeps growing in importance. As a result, it is sometimes difficult to tell where one application ends and another begins. In everyday use this hardly matters: applications work together and their end users do not need to think about boundaries. In measurement, the situation is different, because the point is to measure a specific application and, in particular, its own parts, with no external additions included. In practice, the first problem people run into during measurement is separating the application they want to measure from its surroundings.
Basically, establishing the boundaries should not be problematic, but to avoid any misunderstandings, a few questions need to be answered:
- Why are you measuring the application?
- How does the application maintain data?
- What business areas is the application connected with?
The answers, if well thought out, should tell you everything about the boundaries. There are thousands of ways to draw them, but your job is to choose the one that is appropriate. So, whatever you are actually doing, think about the purpose of the function point count. Keeping it in mind, you will be able to focus on what truly matters in your measurement.
Identifying RETs, DETs, and FTRs
Before analyzing the sizing method, it is useful to explain what the RETs, DETs, and FTRs of the title are.
- RET (Record Element Type) - a user-recognizable subgroup of data elements, most easily identified by looking at the logical groupings of the data.
- DET (Data Element Type) - essentially a unique, user-recognizable, non-repeated field of dynamic information.
- FTR (File Type Referenced) - a file that is referenced by a transaction.
When measuring applications, understanding the types listed above is crucial: it helps to distinguish the transactions, which in turn affects the function point count.
Since an application can consist of five types of components (External Inputs, External Outputs, External Inquiries, External Interface Files, and Internal Logical Files), each component should be rated based on its Data Element Types together with either FTRs or RETs, depending on the component.
DETs to consider (GUI):
- radio buttons
- check boxes
- command buttons
- displays, graphical images, icons
- sound bytes
- photographic images
- messages
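As a compact reminder of which counts drive the rating of each of the five component types, here is an illustrative sketch; the lookup table below is my own shorthand in Python, not an official IFPUG artifact.

```python
# For each component type: the two counts its complexity rating is based on.
# Transactional functions (EI, EO, EQ) are rated on DETs and FTRs;
# data functions (ILF, EIF) are rated on DETs and RETs.
RATING_BASIS = {
    "EI":  ("DET", "FTR"),   # External Input
    "EO":  ("DET", "FTR"),   # External Output
    "EQ":  ("DET", "FTR"),   # External Inquiry
    "ILF": ("DET", "RET"),   # Internal Logical File
    "EIF": ("DET", "RET"),   # External Interface File
}
```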
For real-time systems the situation is more difficult, as the IFPUG manual provides no strict rules.
It is best to define RETs and DETs as early as possible. Failing that, it is suggested either to rate all data function types and transactional function types as average, or to compare them with similar functions counted in other applications of the same type.
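As a rough illustration of that fallback, the sketch below computes an unadjusted function point total assuming every identified component is rated as average, using the standard IFPUG average weights (EI 4, EO 5, EQ 4, ILF 10, EIF 7); the component counts in the usage example are invented.

```python
# Standard IFPUG *average* weights for the five component types.
AVERAGE_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp_all_average(component_counts):
    """Unadjusted function point total when every component is rated average.

    component_counts maps a component type ("EI", "EO", ...) to the number
    of components of that type identified in the application.
    """
    return sum(AVERAGE_WEIGHTS[kind] * count
               for kind, count in component_counts.items())

# Hypothetical application: 12 EIs, 8 EOs, 5 EQs, 6 ILFs, 2 EIFs.
print(unadjusted_fp_all_average({"EI": 12, "EO": 8, "EQ": 5, "ILF": 6, "EIF": 2}))
# -> 12*4 + 8*5 + 5*4 + 6*10 + 2*7 = 182
```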
All in all, it must be remembered that this is probably the most difficult stage of the application measurement process, so we should not be afraid of asking more experienced measurers for help, at least until we gain experience of our own.
Identifying External Inputs
External Inputs have already been mentioned, but it is worth recalling what exactly they are. Basically, they are processes whose defining characteristic is that data crosses the application boundary: external, because the movement is from outside the application to the inside. The elements we should pay attention to are:
- data input fields regardless of their forms
- error or confirmation messages
- calculated values
- derived data
- action keys
At the same time, data elements that perform exactly the same function should be counted as a single DET; the same applies to multiple action keys and to recursive fields. A similar rule should be followed when identifying File Types Referenced: counting one file type several times would only distort the rating results.
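A minimal sketch of that rule, assuming the counter simply collects the user-recognizable name of each field and of each referenced file (the names and sample data below are invented): duplicates collapse because sets keep only unique entries.

```python
def count_dets_and_ftrs(fields, referenced_files):
    """Count unique DETs and FTRs for a single transaction.

    fields           - user-recognizable field names crossing the boundary;
                       repeated fields and multiple action keys performing
                       the same function may appear more than once
    referenced_files - names of the files the transaction references
    """
    dets = len(set(fields))
    ftrs = len(set(referenced_files))
    return dets, ftrs

# Three action keys performing the same function and a repeated field
# still count only once each.
print(count_dets_and_ftrs(
    ["customer_id", "customer_id", "save_key", "save_key", "save_key", "amount"],
    ["customers", "orders", "customers"]))
# -> (3, 2)
```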
Once we have identified the External Inputs, we obviously need to rate them; the rules are simple and are presented in the table below:
Rating the External Inputs
| FTR (File Types Referenced) | DETs: 1 - 4 | DETs: 5 - 15 | DETs: more than 15 |
|---|---|---|---|
| Less than 2 | Low (3) | Low (3) | Avg. (4) |
| Exactly 2 | Low (3) | Avg. (4) | High (6) |
| More than 2 | Avg. (4) | High (6) | High (6) |
The most difficult part is identifying the External Inputs; once we have chosen them, the rest is fairly easy. All we need to do is follow the rules described in the table. For example, when we have exactly two File Types Referenced and 21 Data Elements connected with them, we should rate the input as high and score it 6. Another example: 5 File Types Referenced and only 3 Data Elements. In that case, the component should be rated as average and given the score 4.
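The table translates directly into code. Below is a minimal sketch (the function and variable names are my own) that returns the complexity and score of a single External Input from its FTR and DET counts; the two examples above are reused as a quick check.

```python
def rate_external_input(ftrs, dets):
    """Return (complexity, score) for one External Input,
    following the rating table above."""
    if ftrs < 2:
        row = ("Low", "Low", "Average")
    elif ftrs == 2:
        row = ("Low", "Average", "High")
    else:
        row = ("Average", "High", "High")

    if dets <= 4:
        complexity = row[0]
    elif dets <= 15:
        complexity = row[1]
    else:
        complexity = row[2]

    return complexity, {"Low": 3, "Average": 4, "High": 6}[complexity]

print(rate_external_input(2, 21))  # -> ('High', 6)
print(rate_external_input(5, 3))   # -> ('Average', 4)
```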
Application measurement methods are definitely useful, but performing them properly always takes a lot of time. They are also considered difficult: even the smallest mistakes affect the final result, which, combined with the limited accuracy of measurement, makes FSM methods only estimates. All in all, the point is not to be afraid of them, but to pay attention to every detail and to be willing to repeat any stage of the process whenever something about it is in doubt.