This repository has been archived by the owner on Dec 16, 2022. It is now read-only.
forked from AY2021S2-CS2103T-W13-1/tp
-
Notifications
You must be signed in to change notification settings - Fork 0
/
dictionarybook.json
1147 lines (1147 loc) · 202 KB
/
dictionarybook.json
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
772
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
810
811
812
813
814
815
816
817
818
819
820
821
822
823
824
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
842
843
844
845
846
847
848
849
850
851
852
853
854
855
856
857
858
859
860
861
862
863
864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
880
881
882
883
884
885
886
887
888
889
890
891
892
893
894
895
896
897
898
899
900
901
902
903
904
905
906
907
908
909
910
911
912
913
914
915
916
917
918
919
920
921
922
923
924
925
926
927
928
929
930
931
932
933
934
935
936
937
938
939
940
941
942
943
944
945
946
947
948
949
950
951
952
953
954
955
956
957
958
959
960
961
962
963
964
965
966
967
968
969
970
971
972
973
974
975
976
977
978
979
980
981
982
983
984
985
986
987
988
989
990
991
992
993
994
995
996
997
998
999
1000
{
"content" : [ {
"title": "Requirements",
"header" : "Intro",
"maincontent" : "A software requirement specifies a need to be fulfilled by the software product.\n\nA software project may be,\n\na brown-field project i.e., develop a product to replace/update an existing software product\na green-field project i.e., develop a totally new system with no precedent\nIn either case, requirements need to be gathered, analyzed, specified, and managed.\n\nRequirements come from stakeholders.\n\nIdentifying requirements is often not easy. For example, stakeholders may not be aware of their precise needs, may not know how to communicate their requirements correctly, may not be willing to spend effort in identifying requirements, etc."
},{
"title": "Requirements",
"header" : "Non-functional Requirements",
"maincontent" : "Requirements can be divided into two in the following way:\n\nFunctional requirements specify what the system should do.\nNon-functional requirements specify the constraints under which the system is developed and operated.\n\nYou may have to spend an extra effort in digging NFRs out as early as possible because,\n\nNFRs are easier to miss e.g., stakeholders tend to think of functional requirements first\nsometimes NFRs are critical to the success of the software. E.g. A web application that is too slow or that has low security is unlikely to succeed even if it has all the right functionality.\n"
},{
"title": "Requirements",
"header" : "Quality of Requirements",
"maincontent" : "Here are some characteristics of well-defined requirements:\n\nUnambiguous\nTestable (verifiable)\nClear (concise, terse, simple, precise)\nCorrect\nUnderstandable\nFeasible (realistic, possible)\nIndependent\nAtomic\nNecessary\nImplementation-free (i.e. abstract)\nBesides these criteria for individual requirements, the set of requirements as a whole should be\n\nConsistent\nNon-redundant\nComplete\n"
},{
"title": "Requirements",
"header" : "Prioritizing Requirements",
"maincontent" : "Requirements can be prioritized based on the importance and urgency, while keeping in mind the constraints of schedule, budget, staff resources, quality goals, and other constraints.\n\nA common approach is to group requirements into priority categories. Note that all such scales are subjective, and stakeholders define the meaning of each level in the scale for the project at hand.\n\nSome requirements can be discarded if they are considered ‘out of scope’.\n\n"
},{
"title": "Gathering Requirements",
"header" : "Brainstorming",
"maincontent" : "In a brainstorming session there are no \"bad\" ideas. The aim is to generate ideas; not to validate them. Brainstorming encourages you to \"think outside the box\" and put \"crazy\" ideas on the table without fear of rejection.\n\n"
},{
"title": "Gathering Requirements",
"header" : "Product Surveys",
"maincontent" : "Studying existing products can unearth shortcomings of existing solutions that can be addressed by a new product. Product manuals and other forms of documentation of an existing system can tell us how the existing solutions work.\n\n"
},{
"title": "Gathering Requirements",
"header" : "Observation",
"maincontent" : "Observing users in their natural work environment can uncover product requirements. Usage data of an existing system can also be used to gather information about how an existing system is being used, which can help in building a better replacement e.g. to find the situations where the user makes mistakes when using the current system.\n\n"
},{
"title": "Gathering Requirements",
"header": "User Surveys",
"maincontent": "Surveys can be used to solicit responses and opinions from a large number of stakeholders regarding a current product or a new product."
},{
"title": "Gathering Requirements",
"header": "Interviews",
"maincontent": "Interviewing stakeholders and domain experts can produce useful information about project requirements.\n\n"
},{
"title": "Gathering Requirements",
"header": "Focus Groups",
"maincontent": "Focus groups are a kind of informal interview within an interactive group setting. A group of people (e.g. potential users, beta testers) are asked about their understanding of a specific issue, process, product, advertisement, etc.\n\n"
},{
"title": "Gathering Requirements",
"header": "Prototyping",
"maincontent": "Prototyping can uncover requirements, in particular, those related to how users interact with the system. UI prototypes or mock ups are often used in brainstorming sessions, or in meetings with the users to get quick feedback from them.\n\n"
},{
"title": "Specifying Requirements",
"header": "Prose - What",
"maincontent": "A textual description (i.e. prose) can be used to describe requirements. Prose is especially useful when describing abstract ideas such as the vision of a product.\n\n"
},{
"title": "Specifying Requirements",
"header": "Feature Lists - What",
"maincontent": "Feature list: A list of features of a product grouped according to some criteria such as aspect, priority, order of delivery, etc.\n\n"
},{
"title": "Specifying Requirements",
"header": "User Stories - Introduction",
"maincontent": "User story: User stories are short, simple descriptions of a feature told from the perspective of the person who desires the new capability, usually a user or customer of the system. \nA common format for writing user stories is:\n\nUser story format: As a {user type/role} I can {function} so that {benefit}\n\nYou can write user stories on index cards or sticky notes, and arrange them on walls or tables, to facilitate planning and discussion. Alternatively, you can use a software (e.g., GitHub Project Boards, Trello, Google Docs, ...) to manage user stories digitally.\n\n "
},{
"title": "Specifying Requirements",
"header" : "User Stories - Details",
"maincontent": "The {benefit} can be omitted if it is obvious.\n\n It is recommended to confirm there is a concrete benefit even if you omit it from the user story. If not, you could end up adding features that have no real benefit.\n\nYou can add more characteristics to the {user role} to provide more context to the user story.\n\nYou can write user stories at various levels. High-level user stories, called epics (or themes) cover bigger functionality. You can then break down these epics to multiple user stories of normal size.\n\nYou can add conditions of satisfaction to a user story to specify things that need to be true for the user story implementation to be accepted as ‘done’.\n\nOther useful info that can be added to a user story includes (but not limited to)\n\nPriority: how important the user story is\nSize: the estimated effort to implement the user story\nUrgency: how soon the feature is needed."
},{
"title": "Specifying Requirements",
"header" : "User Stories - Usage",
"maincontent": "User stories capture user requirements in a way that is convenient for scoping, estimation, and scheduling.\n\nUser stories differ from traditional requirements specifications mainly in the level of detail. User stories should only provide enough details to make a reasonably low risk estimate of how long the user story will take to implement. When the time comes to implement the user story, the developers will meet with the customer face-to-face to work out a more detailed description of the requirements.\nUser stories can capture non-functional requirements too because even NFRs must benefit some stakeholder.\nGiven their lightweight nature, user stories are quite handy for recording requirements during early stages of requirements gathering.\n\nWhile use cases can be recorded on physical paper in the initial stages, an online tool is more suitable for longer-term management of user stories, especially if the team is not co-located.\n\n"
},{
"title": "Specifying Requirements",
"header" : "Glossary - What",
"maincontent": "Glossary: A glossary serves to ensure that all stakeholders have a common understanding of the noteworthy terms, abbreviations, acronyms etc.\n\n"
},{
"title": "Specifying Requirements",
"header" : "Supplementary Requirements - What",
"maincontent": "A supplementary requirements section can be used to capture requirements that do not fit elsewhere. Typically, this is where most Non-Functional Requirements will be listed.\n\n"
},{
"title": "Design",
"header" : "Software Design - Introduction - What",
"maincontent": "Software design has two main aspects:\n\n1. Product/external design: designing the external behavior of the product to meet the users' requirements. This is usually done by product designers with input from business analysts, user experience experts, user representatives, etc.\n\n2. Implementation/internal design: designing how the product will be implemented to meet the required external behavior. This is usually done by software architects and software engineers.\n\n"
},{
"title": "Design",
"header" : "Design Fundamentals - Abstraction - What",
"maincontent": "Abstraction is a technique for dealing with complexity. It works by establishing a level of complexity we are interested in, and suppressing the more complex details below that level.\n\nThe guiding principle of abstraction is that only details that are relevant to the current perspective or the task at hand need to be considered. As most programs are written to solve complex problems involving large amounts of intricate details, it is impossible to deal with all these details at the same time. That is where abstraction can help.\n\nData abstraction: abstracting away the lower level data items and thinking in terms of bigger entities\n\nControl abstraction: abstracting away details of the actual control flow to focus on tasks at a higher level\n\nAbstraction can be applied repeatedly to obtain progressively higher levels of abstraction.\n\nAbstraction is a general concept that is not limited to just data or control abstractions.\n\n"
},{
"title": "Design",
"header" : "Design Fundamentals - Coupling - What",
"maincontent": "Coupling is a measure of the degree of dependence between components, classes, methods, etc. Low coupling indicates that a component is less dependent on other components. High coupling (aka tight coupling or strong coupling) is discouraged due to the following disadvantages:\n\n1. Maintenance is harder because a change in one module could cause changes in other modules coupled to it (i.e. a ripple effect).\n\n2. Integration is harder because multiple components coupled with each other have to be integrated at the same time.\n\n3. Testing and reuse of the module is harder due to its dependence on other modules.\n\n"
},{
"title": "Design",
"header" : "Design Fundamentals - Coupling - How",
"maincontent": "X is coupled to Y if a change to Y can potentially require a change in X.\n\n"
},{
"title": "Design",
"header" : "Design Fundamentals - Coupling - Types of Coupling",
"maincontent": "Some examples of different coupling types:\n\n1. Content coupling: one module modifies or relies on the internal workings of another module e.g., accessing local data of another module\n\n2. Common/Global coupling: two modules share the same global data\n\n3. Control coupling: one module controlling the flow of another, by passing it information on what to do e.g., passing a flag\n\n4. Data coupling: one module sharing data with another module e.g. via passing parameters\n\n5. External coupling: two modules share an externally imposed convention e.g., data formats, communication protocols, device interfaces.\n\n6. Subclass coupling: a class inherits from another class. Note that a child class is coupled to the parent class but not the other way around.\n\n7. Temporal coupling: two actions are bundled together just because they happen to occur at the same time e.g. extracting a contiguous block of code as a method although the code block contains statements unrelated to each other\n\n"
},{
"title": "Design",
"header" : "Design Fundamentals - Cohesion - What",
"maincontent": "Cohesion is a measure of how strongly-related and focused the various responsibilities of a component are. A highly-cohesive component keeps related functionalities together while keeping out all other unrelated things.\n\nHigher cohesion is better. Disadvantages of low cohesion (aka weak cohesion):\n\n1. Lowers the understandability of modules as it is difficult to express module functionalities at a higher level.\n\nLowers maintainability because a module can be modified due to unrelated causes (reason: the module contains code unrelated to each other) or many modules may need to be modified to achieve a small change in behavior (reason: because the code related to that change is not localized to a single module).\n\nLowers reusability of modules because they do not represent logical units of functionality.\n\n"
},{
"title": "Design",
"header" : "Design Fundamentals - Cohesion - How",
"maincontent": "Cohesion can be present in many forms. Some examples:\n\n1. Code related to a single concept is kept together, e.g. the Student component handles everything related to students.\n\n2. Code that is invoked close together in time is kept together, e.g. all code related to initializing the system is kept together.\n\n3. Code that manipulates the same data structure is kept together, e.g. the GameArchive component handles everything related to the storage and retrieval of game sessions.\n\n"
},{
"title": "Design",
"header" : "Modeling - Introduction - What",
"maincontent": "A model is a representation of something else.\n\nA model provides a simpler view of a complex entity because a model captures only a selected aspect. This omission of some aspects implies models are abstractions.\n\nMultiple models of the same entity may be needed to capture it fully.\n\n"
},{
"title": "Design",
"header" : "Modeling - Introduction - How",
"maincontent": "In software development, models are useful in several ways:\n\na. To analyze a complex entity related to software development.\n\nb. To communicate information among stakeholders. Models can be used as a visual aid in discussions and documentation.\n\nc. As a blueprint for creating software. Models can be used as instructions for building software.\n\n"
},{
"title": "Design",
"header" : "Modeling - Introduction - UML Models",
"maincontent": "Unified Modeling Language (UML) is a graphical notation to describe various aspects of a software system. UML is the brainchild of three software modeling specialists James Rumbaugh, Grady Booch and Ivar Jacobson (also known as the Three Amigos). Each of them had developed their own notation for modeling software systems before joining forces to create a unified modeling language (hence, the term ‘Unified’ in UML). UML is currently the de facto modeling notation used in the software industry.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Structures - OO Structures",
"maincontent": "An OO solution is basically a network of objects interacting with each other. Therefore, it is useful to be able to model how the relevant objects are 'networked' together inside a software i.e. how the objects are connected together.\n\nNote that these object structures within the same software can change over time.\n\nHowever, object structures do not change at random; they change based on a set of rules, as was decided by the designer of that software. Those rules that object structures need to follow can be illustrated as a class structure i.e. a structure that exists among the relevant classes.\n\nUML Object Diagrams are used to model object structures and UML Class Diagrams are used to model class structures of an OO solution.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Structures - Class Diagrams (Basics, Part 1)",
"maincontent": "UML class diagrams describe the structure (but not the behavior) of an OOP solution. These are possibly the most often used diagrams in the industry and are an indispensable tool for an OO programmer.\n\nClasses form the basis of class diagrams.\n\nThe basic UML notations used to represent a class consists of three compartments: 'Class Name', 'Attributes', and 'Methods' ('Operations').\n\nThe 'Operations' compartment and/or the 'Attributes' compartment may be omitted if such details are not important for the task at hand. 'Attributes' always appear above the 'Operations' compartment. All operations should be in one compartment rather than each operation in a separate compartment. Same goes for attributes.\n\nThe visibility of attributes and operations is used to indicate the level of access allowed for each attribute or operation. The types of visibility and their exact meanings depend on the programming language used. Here are some common visibilities and how they are indicated in a class diagram:\n\n+ : public\n\n- : private\n\n# : protected\n\n~ : package private\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Structures - Class Diagrams (Basics, Part 2)",
"maincontent": "The notation format of generic classes is drawn as a box with a dashed outline on the top-right corner of the class's box.\n\nIn UML class diagrams, underlines denote class-level attributes and variables.\n\nAssociations are the main connections among the classes in a class diagram.\n\nThe most basic class diagram is a bunch of classes with some solid lines among them to represent associations.\n\nIn addition, associations can show additional decorations such as association labels, association roles, multiplicity and navigability to add more information to a class diagram.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Structures - Adding More Info to UML Models",
"maincontent": "UML notes can be used to add more info to any UML model.\n\nUML notes can augment UML diagrams with additional information. These notes can be shown connected to a particular element in the diagram or can be shown without a connection\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Structures - Class Diagrams (Intermediate)",
"maincontent": "A class diagram can also show different types of relationships between classes: inheritance, compositions, aggregations, dependencies.\n\nA class diagram can also show different types of class-like entities: enumerations, abstract classes, and interfaces.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Structures - Class Diagrams (Advanced)",
"maincontent": "A class diagram can show association classes too.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Structures - Object Diagrams",
"maincontent": "Object diagrams can be used to complement class diagrams. For example, you can use object diagrams to model different object structures that can result from a design represented by a given class diagram.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Structures - Object Oriented Domain Models",
"maincontent": "Class diagrams can also be used to model objects in the problem domain (i.e. to model how objects actually interact in the real world, before emulating them in the solution). Class diagrams that are used to model the problem domain are called conceptual class diagrams or OO domain models (OODMs).\n\nOODMs do not contain solution-specific classes (i.e. classes that are used in the solution domain but do not exist in the problem domain). For example, a class called DatabaseConnection could appear in a class diagram but not usually in an OO domain model because DatabaseConnection is something related to a software solution but not an entity in the problem domain.\n\nOODMs represents the class structure of the problem domain and not their behavior, just like class diagrams. To show behavior, use other diagrams such as sequence diagrams.\n\nOODM notation is similar to class diagram notation but omit methods and navigability.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Structures - Deployment Diagrams",
"maincontent": "A deployment diagram shows a system's physical layout, revealing which pieces of software run on which pieces of hardware.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Structures - Component Diagrams",
"maincontent": "A component diagram is used to show how a system is divided into components and how they are connected to each other through interfaces.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Structures - Package Diagrams",
"maincontent": "A package diagram shows packages and their dependencies. A package is a grouping construct for grouping UML elements (classes, use cases, etc.).\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Structures - Composite Structure Diagrams",
"maincontent": "A composite structure diagram hierarchically decomposes a class into its internal structure.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Behaviours - Activity Diagrams (Basic, Part 1)",
"maincontent": "Software projects often involve workflows. Workflows define the flow in which a process or a set of tasks is executed. Understanding such workflows is important for the success of the software project.\n\nUML activity diagrams (AD) can model workflows. Flow charts are another type of diagram that can model workflows. Activity diagrams are the UML equivalent of flow charts.\n\nAn activity diagram (AD) captures an activity through the actions and control flows that make up the activity.\n\nAn action is a single step in an activity. It is shown as a rectangle with rounded corners.\n\nA control flow shows the flow of control from one action to the next. It is shown by drawing a line with an arrow-head to show the direction of the flow.\n\nNote the slight difference between the start node and the end node which represent the start and the end of the activity, respectively, in which the end node is surrounded by a circle outline while the start node is not.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Behaviours - Activity Diagrams (Basic, Part 2)",
"maincontent": "A branch node shows the start of alternate paths. Each control flow exiting a branch node has a guard condition: a boolean condition that should be true for execution to take that path. Exactly one of the guard conditions should be true an any.\n\nA merge node shows the end of alternate paths.\n\nBoth branch nodes and merge nodes are diamond shapes. Guard conditions must be in square brackets.\n\nSome acceptable simplifications (by convention):\n\n1. Omitting the merge node if it doesn't cause any ambiguities.\n\n2. Multiple arrows can starting from the same corner of a branch node.\n\nOmitting the [Else] condition.\n\nFork nodes indicate the start of concurrent flows of control.\n\nJoin nodes indicate the end of parallel paths.\n\nBoth have the same notation: a bar.\n\nIn a set of parallel paths, execution along all parallel paths should be complete before the execution can start on the outgoing control flow of the join.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Behaviours - Activity Diagrams (Intermediate)",
"maincontent": "The rake notation is used to indicate that a part of the activity is given as a separate diagram.\n\nIt is possible to partition an activity diagram to show who is doing which action. Such partitioned activity diagrams are sometime called swimlane diagrams.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Behaviours - Sequence Diagrams (Basic)",
"maincontent": "A UML sequence diagram captures the interactions between multiple objects for a given scenario.\n\nArrows representing method calls should be solid arrows while those representing method returns should be dashed arrows.\n\nNote that unlike in object diagrams, the class/object name is not underlined in sequence diagrams.\n\nThe arrow that represents a constructor arrives at the side of the box representing the instance.\n\nThe activation bar represents the period the constructor is active.\n\nTo reduce clutter, activation bars and return arrows may be omitted if they do not result in ambiguities or loss of relevant information. Informal operation descriptions can be used, if more precise details are not required for the task at hand.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Behaviours - Sequence Diagrams (Intermediate)",
"maincontent": "UML uses an X at the end of the lifeline of an object to show its deletion.\n\nUML can show a method of an object calling another of its own methods.\n\nUML uses alt frames to indicate alternative paths.\n\nUML uses opt frames to indicate optional paths.\n\nMethod calls to static (i.e., class-level) methods are received by the class itself, not an instance of that class. You can use <<class>> to show that a participant is the class itself.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Behaviours - Use Case Diagrams",
"maincontent": "Use case diagrams model the mapping between features of a system and its user roles.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Behaviours - Timing Diagrams",
"maincontent": "A timing diagram focuses on timing constraints.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Behaviours - Interaction Overview Diagrams",
"maincontent": "Interaction overview diagrams are a combination of activity diagrams and sequence diagrams.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Behaviours - Communication Diagrams",
"maincontent": "Communication diagrams are like sequence diagrams but emphasize the data links between the various participants in the interaction rather than the sequence of interactions.\n\n"
},{
"title": "Design",
"header" : "Modeling - Modeling Behaviours - State Machine Diagrams",
"maincontent": "A State Machine Diagram models state-dependent behavior.\n\nOften, state-dependent behavior displayed by an object in a system is simple enough that it needs no extra attention; such a behavior can be as simple as a conditional behavior like if x > y, then x = x - y.\n\nOccasionally, objects may exhibit state-dependent behavior that is complex enough such that it needs to be captured in a separate model. Such state-dependent behavior can be modeled using UML state machine diagrams (SMD for short, sometimes also called ‘state charts’, ‘state diagrams’ or ‘state machines’).\n\nOccasionally, objects may exhibit state-dependent behavior that is complex enough such that it needs to be captured into a separate model. Such state-dependent behavior can be modelled using UML state machine diagrams (SMD for short, sometimes also called ‘state charts’, ‘state diagrams’ or ‘state machines’).\n\nAn SMD views the life-cycle of an object as consisting of a finite number of states where each state displays a unique behavior pattern. SMDs capture information such as the states an object can be in during its lifetime, how the object responds to various events while in each state, and how the object transits from one state to another. In contrast to sequence diagrams that capture object behavior one scenario at a time, SMDs capture the object’s behavior over its full life-cycle.\n\n"
},{
"title": "Design",
"header" : "Software Architecture - Introudction - What",
"maincontent": "The software architecture shows the overall organization of the system and can be viewed as a very high-level design. It usually consists of a set of interacting components that fit together to achieve the required functionality. It should be a simple and technically viable structure that is well-understood and agreed-upon by everyone in the development team, and it forms the basis for the implementation.\n\nThe architecture is typically designed by the software architect, who provides the technical vision of the system and makes high-level (i.e. architecture-level) technical decisions about the project.\n\n"
},{
"title": "Design",
"header" : "Software Architecture - Architecture Diagrams - Reading",
"maincontent": "Architecture diagrams are free-form diagrams. There is no universally adopted standard notation for architecture diagrams. Any symbols that reasonably describe the architecture may be used.\n\n"
},{
"title": "Design",
"header" : "Software Architecture - Architecture Diagrams - Drawing",
"maincontent": "While architecture diagrams have no standard notation, try to follow these basic guidelines when drawing them.\n\nMinimize the variety of symbols. If the symbols you choose do not have widely-understood meanings e.g. A drum symbol is widely-understood as representing a database, explain their meaning.\n\nAvoid the indiscriminate use of double-headed arrows to show interactions between components.\n\n"
},{
"title": "Design",
"header" : "Software Architecture - Architecture Styles - Introduction",
"maincontent": "Software architectures follow various high-level styles (aka architectural patterns), just like how building architectures follow various architecture styles.\n\n"
},{
"title": "Design",
"header" : "Software Architecture - Architecture Styles - N-tier Architectural Style",
"maincontent": "In the n-tier style, higher layers make use of services provided by lower layers. Lower layers are independent of higher layers. Other names: multi-layered, layered.\n\n"
},{
"title": "Design",
"header" : "Software Architecture - Architecture Styles - Client-Server Architectural Style",
"maincontent": "The client-server style has at least one component playing the role of a server and at least one client component accessing the services of the server. This is an architectural style used often in distributed applications.\n\n"
},{
"title": "Design",
"header" : "Software Architecture - Architecture Styles - Transaction Processing Architectural Style",
"maincontent": "The transaction processing style divides the workload of the system down to a number of transactions which are then given to a dispatcher that controls the execution of each transaction. Task queuing, ordering, undo etc. are handled by the dispatcher.\n\n"
},{
"title": "Design",
"header" : "Software Architecture - Architecture Styles - Service-oriented Architectural Style",
"maincontent": "The service-oriented architecture (SOA) style builds applications by combining functionalities packaged as programmatically accessible services. SOA aims to achieve interoperability between distributed services, which may not even be implemented using the same programming language. A common way to implement SOA is through the use of XML web services where the web is used as the medium for the services to interact, and XML is used as the language of communication between service providers and service users.\n\n"
},{
"title": "Design",
"header" : "Software Architecture - Architecture Styles - Event-driven Architectural Style",
"maincontent": "Event-driven style controls the flow of the application by detecting events from event emitters and communicating those events to interested event consumers. This architectural style is often used in GUIs.\n\n"
},{
"title": "Design",
"header" : "Software Architecture - Architecture Styles - More Styles",
"maincontent": "Other well-known architectural styles include the pipes-and-filters architecture, the broker architecture, the peer-to-peer architecture, and the message-oriented architecture.\n\n"
},{
"title": "Design",
"header" : "Software Architecture - Architecture Styles - Using Styles",
"maincontent": "Most applications use a mix of these architectural styles.\n\n"
},{
"title": "Design",
"header" : "Software Design Patterns - Introduction - What",
"maincontent": "Design pattern: An elegant reusable solution to a commonly recurring problem within a given context in software design.\n\nIn software development, there are certain problems that recur in a certain context.\n\nAfter repeated attempts at solving such problems, better solutions are discovered and refined over time. These solutions are known as design patterns, a term popularized by the seminal book Design Patterns: Elements of Reusable Object-Oriented Software by the so-called \"Gang of Four\" (GoF) written by Eric Gamma, Richard Helm, Ralph Johnson, and John Vlissides.\n\n"
},{
"title": "Design",
"header" : "Software Design Patterns - Introduction - Format",
"maincontent": "The common format to describe a pattern consists of the following components:\n\nContext: The situation or scenario where the design problem is encountered.\nProblem: The main difficulty to be resolved.\nSolution: The core of the solution. It is important to note that the solution presented only includes the most general details, which may need further refinement for a specific context.\nAnti-patterns (optional): Commonly used solutions, which are usually incorrect and/or inferior to the Design Pattern.\nConsequences (optional): Identifying the pros and cons of applying the pattern.\nOther useful information (optional): Code examples, known uses, other related patterns, etc.\n\n"
},{
"title": "Design",
"header" : "Software Design Patterns - Singleton Pattern - What",
"maincontent": "Context\n\nCertain classes should have no more than just one instance (e.g. the main controller class of the system). These single instances are commonly known as singletons.\n\nProblem\n\nA normal class can be instantiated multiple times by invoking the constructor.\n\nSolution\n\nMake the constructor of the singleton class private, because a public constructor will allow others to instantiate the class at will. Provide a public class-level method to access the single instance.\n\n"
},{
"title": "Design",
"header" : "Software Design Patterns - Singleton Pattern - Evaluation",
"maincontent": "Pros:\n\neasy to apply\neffective in achieving its goal with minimal extra work\nprovides an easy way to access the singleton object from anywhere in the code base\n\nCons:\n\nThe singleton object acts like a global variable that increases coupling across the code base.\nIn testing, it is difficult to replace Singleton objects with stubs (static methods cannot be overridden).\nIn testing, singleton objects carry data from one test to another even when you want each test to be independent of the others.\nGiven that there are some significant cons, it is recommended that you apply the Singleton pattern when, in addition to requiring only one instance of a class, there is a risk of creating multiple objects by mistake, and creating such multiple objects has real negative consequences.\n\n"
},{
"title": "Design",
"header" : "Software Design Patterns - Abstraction Occurrence Pattern - What",
"maincontent": "Context\n\nThere is a group of similar entities that appear to be ‘occurrences’ (or ‘copies’) of the same thing, sharing lots of common information, but also differing in significant ways.\n\nProblem\n\nRepresenting the objects mentioned previously as a single class would be problematic because it results in duplication of data which can lead to inconsistencies in data (if some of the duplicates are not updated consistently).\n\nSolution\n\nThe <<Abstraction>> class should hold all common information, and the unique information should be kept by the <<Occurrence>> class. Note that ‘Abstraction’ and ‘Occurrence’ are not class names, but roles played by each class. Think of this diagram as a meta-model (i.e. a ‘model of a model’) of the BookTitle-BookCopy class diagram given above.\n\n"
},{
"title": "Design",
"header" : "Software Design Patterns - Facade Pattern - What",
"maincontent": "Context\n\nComponents need to access functionality deep inside other components.\n\nProblem\n\nAccess to the component should be allowed without exposing its internal details.\n\nSolution\n\nInclude a Façade class that sits between the component internals and users of the component such that all access to the component happens through the Facade class.\n\n"
},{
"title": "Design",
"header" : "Software Design Patterns - Command Pattern - What",
"maincontent": "Context\n\nA system is required to execute a number of commands, each doing a different task.\n\nProblem\n\nIt is preferable that some part of the code executes these commands without having to know each command type.\n\nSolution\n\nThe essential element of this pattern is to have a general <<Command>> object that can be passed around, stored, executed, etc without knowing the type of command (i.e. via polymorphism).\n\n"
},{
"title": "Design",
"header" : "Software Design Patterns - Model View Controller (MVC) Pattern - What",
"maincontent": "Context\n\nMost applications support storage/retrieval of information, displaying of information to the user (often via multiple UIs having different formats), and changing stored information based on external inputs.\n\nProblem\n\nThe high coupling that can result from the interlinked nature of the features described above.\n\nSolution\n\nDecouple data, presentation, and control logic of an application by separating them into three different components: Model, View and Controller.\n\nView: Displays data, interacts with the user, and pulls data from the model if necessary.\nController: Detects UI events such as mouse clicks and button pushes, and takes follow up action. Updates/changes the model/view when necessary.\nModel: Stores and maintains data. Updates the view if necessary.\n\nNote that in a simple UI where there’s only one view, Controller and View can be combined as one class.\n\nThere are many variations of the MVC model used in different domains.\n\n"
},{
"title": "Design",
"header" : "Software Design Patterns - Observer Pattern - What",
"maincontent": "Context\n\nAn object (possibly more than one) is interested in being notified when a change happens to another object. That is, some objects want to ‘observe’ another object.\n\nProblem\n\nThe ‘observed’ object does not want to be coupled to objects that are ‘observing’ it.\n\nSolution\n\nForce the communication through an interface known to both parties.\n\n"
},{
"title": "Design",
"header" : "Software Design Patterns - More - Combining Design Patterns",
"maincontent": "Design patterns are usually embedded in a larger design and sometimes applied in combination with other design patterns.\n\n"
},{
"title": "Design",
"header" : "Software Design Patterns - More - Other Design Patterns",
"maincontent": "The most famous source of design patterns is the \"Gang of Four\" (GoF) book which contains 23 design patterns divided into three categories:\n\nCreational: About object creation. They separate the operation of an application from how its objects are created.\nAbstract Factory, Builder, Factory Method, Prototype, Singleton\nStructural: About the composition of objects into larger structures while catering for future extension in structure.\nAdapter, Bridge, Composite, Decorator, Façade, Flyweight, Proxy\nBehavioral: Defining how objects interact and how responsibility is distributed among them.\nChain of Responsibility, Command, Interpreter, Template Method, Iterator, Mediator, Memento, Observer, State, Strategy, Visitor\n\n"
},{
"title": "Design",
"header" : "Software Design Patterns - More - Using Design Patterns",
"maincontent": "Design patterns provide a high-level vocabulary to talk about design.\n\nKnowing more patterns is a way to become more ‘experienced’. Aim to learn at least the context and the problem of patterns so that when you encounter those problems you know where to look for a solution.\n\nSome patterns are domain-specific e.g. patterns for distributed applications, some are created in-house e.g. patterns in the company/project and some can be self-created e.g. from past experience.\n\nBe careful not to overuse patterns. Do not throw patterns at a problem at every opportunity. Patterns come with overhead such as adding more classes or increasing the levels of abstraction. Use them only when they are needed. Before applying a pattern, make sure that:\n\n1. There is substantial improvement in the design, not just superficial.\n2. The associated tradeoffs are carefully considered. There are times when a design pattern is not appropriate (or an overkill).\n\n"
},{
"title": "Design",
"header" : "Software Design Patterns - More - Other Types of Patterns",
"maincontent": "The notion of capturing design ideas as \"patterns\" is usually attributed to Christopher Alexander. He is a building architect noted for his theories about design. His book The Timeless Way of Building talks about \"design patterns\" for constructing buildings.\n\nApparently, patterns and anti-patterns are found in the field of building architecture. This is because they are general concepts applicable to any domain, not just software design. In software engineering, there are many general types of patterns: Analysis patterns, Design patterns, Testing patterns, Architectural patterns, Project management patterns, and so on.\n\nIn fact, the abstraction occurrence pattern is more of an analysis pattern than a design pattern, while MVC is more of an architectural pattern.\n\nNew patterns can be created too. If a common problem that needs to be solved frequently leads to a non-obvious and better solution, it can be formulated as a pattern so that it can be reused by others. However, don’t reinvent the wheel; the pattern might already exist.\n\n"
},{
"title": "Design",
"header" : "Software Design Patterns - More - Design Patterns vs. Design Principles",
"maincontent": "Design principles have varying degrees of formality – rules, opinions, rules of thumb, observations, and axioms. Compared to design patterns, principles are more general, have wider applicability, with correspondingly greater overlap among them.\n\n"
},{
"title": "Design",
"header" : "Design Approaches - Multi-level Design - What",
"maincontent": "In a smaller system, the design of the entire system can be shown in one place.\n\nThe design of bigger systems needs to be done/shown at multiple levels.\n\n"
},{
"title": "Design",
"header" : "Design Approaches - Top-Down and Bottom-Up Design - What",
"maincontent": "Multi-level design can be done in a top-down manner, bottom-up manner, or as a mix.\n\n1. Top-down: Design the high-level design first and flesh out the lower levels later. This is especially useful when designing big and novel systems where the high-level design needs to be stable before lower levels can be designed.\n\n2. Bottom-up: Design lower level components first and put them together to create the higher-level systems later. This is not usually scalable for bigger systems. One instance where this approach might work is when designing a variation of an existing system or re-purposing existing components to build a new system.\n\n3. Mix: Design the top levels using the top-down approach but switch to a bottom-up approach when designing the bottom levels.\n\n"
},{
"title": "Design",
"header" : "Design Approaches - Agile Design - What",
"maincontent": "Agile design can be contrasted with full upfront design in the following way:\n\nAgile designs are emergent, they’re not defined up front. Your overall system design will emerge over time, evolving to fulfill new requirements and take advantage of new technologies as appropriate. Although you will often do some initial architectural modeling at the very beginning of a project, this will be just enough to get your team going. This approach does not produce a fully documented set of models in place before you may begin coding. (adapted from agilemodeling.com)\n\n"
},{
"title": "Implementation",
"header": "IDE: Introduction",
"maincontent": "Professional software engineers often write code using Integrated Development Environments (IDEs). IDEs support most development-related work within the same tool (hence, the term integrated).\n\nAn IDE generally consists of:\n\nA source code editor that includes features such as syntax coloring, auto-completion, easy code navigation, error highlighting, and code-snippet generation.\nA compiler and/or an interpreter (together with other build automation support) that facilitates the compilation/linking/running/deployment of a program.\nA debugger that allows the developer to execute the program one step at a time to observe the run-time behavior in order to locate bugs.\nOther tools that aid various aspects of coding e.g. support for automated testing, drag-and-drop construction of UI components, version management support, simulation of the target runtime platform, and modeling support.\nExamples of popular IDEs:\n\nJava: Eclipse, Intellij IDEA, NetBeans\nC#, C++: Visual Studio\nSwift: XCode\nPython: PyCharm\nSome web-based IDEs have appeared in recent times too e.g., Amazon's Cloud9 IDE.\n\nSome experienced developers, in particular those with a UNIX background, prefer lightweight yet powerful text editors with scripting capabilities (e.g. Emacs) over heavier IDEs.\n\n"
},{
"title": "Implementation",
"header": "IDE: Debugging",
"maincontent": "Debugging is the process of discovering defects in the program. Here are some approaches to debugging:\n\n Bad -- By inserting temporary print statements: This is an ad-hoc approach in which print statements are inserted in the program to print information relevant to debugging, such as variable values. e.g. Exiting process() method, x is 5.347. This approach is not recommended due to these reasons:\nIncurs extra effort when inserting and removing the print statements.\nThese extraneous program modifications increase the risk of introducing errors into the program.\nThese print statements, if not removed promptly after the debugging, may even appear unexpectedly in the production version.\n Bad -- By manually tracing through the code: Otherwise known as ‘eye-balling’, this approach doesn't have the cons of the previous approach, but it too is not recommended (other than as a 'quick try') due to these reasons:\nIt is a difficult, time consuming, and error-prone technique.\nIf you didn't spot the error while writing the code, you might not spot the error when reading the code either.\n Good -- Using a debugger: A debugger tool allows you to pause the execution, then step through the code one statement at a time while examining the internal state if necessary. Most IDEs come with an inbuilt debugger. This is the recommended approach for debugging."
},
{
"title": "Implementation",
"header": "Code Quality: Introduction",
"maincontent": "Production code needs to be of high quality. Given how the world is becoming increasingly dependent on software, poor quality code is something no one can afford to tolerate."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline - Maximise Readability: Introduction",
"maincontent": "Among various dimensions of code quality, such as run-time efficiency, security, and robustness, one of the most important is understandability. This is because in any non-trivial software project, code needs to be read, understood, and modified by other developers later on. Even if you do not intend to pass the code to someone else, code quality is still important because you will become a 'stranger' to your own code someday.\n\n Bad code: \nint subsidy() {\n int subsidy;\n if (!age) {\n if (!sub) {\n if (!notFullTime) {\n subsidy = 500;\n } else {\n subsidy = 250;\n }\n } else {\n subsidy = 250;\n }\n } else {\n subsidy = -1;\n }\n return subsidy;\n}\n\nGood code:\nint calculateSubsidy() {\n int subsidy;\n if (isSenior) {\n subsidy = REJECT_SENIOR;\n } else if (isAlreadySubsidized) {\n subsidy = SUBSIDIZED_SUBSIDY;\n } else if (isPartTime) {\n subsidy = FULLTIME_SUBSIDY * RATIO;\n } else {\n subsidy = FULLTIME_SUBSIDY;\n }\n return subsidy;\n}"
},
{
"title": "Implementation",
"header": "Code Quality: Guideline - Maximise Readability: Avoid Long Methods",
"maincontent": "Be wary when a method is longer than the computer screen, and take corrective action when it goes beyond 30 LOC (lines of code). The bigger the haystack, the harder it is to find a needle."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline - Maximise Readability: Avoid Deep Nesting",
"maincontent": "In particular, avoid arrowhead style code.\n\nBad Code:\nint subsidy() {\n int subsidy;\n if (!age) {\n if (!sub) {\n if (!notFullTime) {\n subsidy = 500;\n } else {\n subsidy = 250;\n }\n } else {\n subsidy = 250;\n }\n } else {\n subsidy = -1;\n }\n return subsidy;\n}\n\nGood code:\nint calculateSubsidy() {\n int subsidy;\n if (isSenior) {\n subsidy = REJECT_SENIOR;\n } else if (isAlreadySubsidized) {\n subsidy = SUBSIDIZED_SUBSIDY;\n } else if (isPartTime) {\n subsidy = FULLTIME_SUBSIDY * RATIO;\n } else {\n subsidy = FULLTIME_SUBSIDY;\n }\n return subsidy;\n}"
},
{
"title": "Implementation",
"header": "Code Quality: Guideline - Maximise Readability: Avoid Complicated Expressions",
"maincontent": "Avoid complicated expressions, especially those having many negations and nested parentheses. If you must evaluate complicated expressions, have it done in steps (i.e. calculate some intermediate values first and use them to calculate the final value)."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline - Maximise Readability: Avoid Magic Numbers",
"maincontent": "When the code has a number that does not explain the meaning of the number, it is called a \"magic number\" (as in \"the number appears as if by magic\"). Using a named constant makes the code easier to understand because the name tells us more about the meaning of the number."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline - Maximise Readability: Make the Code Obvious",
"maincontent": "Make the code as explicit as possible, even if the language syntax allows them to be implicit. Here are some examples:\n\n[Java] Use explicit type conversion instead of implicit type conversion.\n[Java, Python] Use parentheses/braces to show groupings even when they can be skipped.\n[Java, Python] Use enumerations when a certain variable can take only a small number of finite values. For example, instead of declaring the variable 'state' as an integer and using values 0, 1, 2 to denote the states 'starting', 'enabled', and 'disabled' respectively, declare 'state' as type SystemState and define an enumeration SystemState that has values 'STARTING', 'ENABLED', and 'DISABLED'."
},
{
"title": "Implementation",
"header": "Code Quality: Intermediate - Structure Code Logically",
"maincontent": "Lay out the code so that it adheres to the logical structure. The code should read like a story. Just like how you use section breaks, chapters and paragraphs to organize a story, use classes, methods, indentation and line spacing in your code to group related segments of the code. For example, you can use blank lines to group related statements together.\n\nSometimes, the correctness of your code does not depend on the order in which you perform certain intermediary steps. Nevertheless, this order may affect the clarity of the story you are trying to tell. Choose the order that makes the story most readable."
},
{
"title": "Implementation",
"header": "Code Quality: Intermediate - Do Not 'Trip Up' Reader",
"maincontent": "Avoid things that would make the reader go ‘huh?’, such as,\n\nunused parameters in the method signature\nsimilar things that look different\ndifferent things that look similar\nmultiple statements in the same line\ndata flow anomalies such as, pre-assigning values to variables and modifying it without any use of the pre-assigned value"
},
{
"title": "Implementation",
"header": "Code Quality: Intermediate - Practice KISSing",
"maincontent": "As the old adage goes, \"keep it simple, stupid” (KISS). Do not try to write ‘clever’ code. For example, do not dismiss the brute-force yet simple solution in favor of a complicated one because of some ‘supposed benefits’ such as 'better reusability' unless you have a strong justification."
},
{
"title": "Implementation",
"header": "Code Quality: Intermediate - Avoid Premature Optimizations",
"maincontent": "Optimizing code prematurely has several drawbacks:\n\nYou may not know which parts are the real performance bottlenecks. This is especially the case when the code undergoes transformations (e.g. compiling, minifying, transpiling, etc.) before it becomes an executable. Ideally, you should use a profiler tool to identify the actual bottlenecks of the code first, and optimize only those parts.\nOptimizing can complicate the code, affecting correctness and understandability.\nHand-optimized code can be harder for the compiler to optimize (the simpler the code, the easier it is for the compiler to optimize). In many cases, a compiler can do a better job of optimizing the runtime code if you don't get in the way by trying to hand-optimize the source code.\nA popular saying in the industry is make it work, make it right, make it fast which means in most cases, getting the code to perform correctly should take priority over optimizing it. If the code doesn't work correctly, it has no value no matter how fast/efficient it is.\nNote that there are cases where optimizing takes priority over other things e.g. when writing code for resource-constrained environments. This guideline is simply a caution that you should optimize only when it is really needed."
},
{
"title": "Implementation",
"header": "Code Quality: Intermediate - SLAP hard",
"maincontent": "Avoid varying the level of abstraction within a code fragment. Note: The book The Productive Programmer (by Neal Ford) calls this the Single Level of Abstraction Principle (SLAP) while the book Clean Code (by Robert C. Martin) calls this One Level of Abstraction per Function."
},
{
"title": "Implementation",
"header": "Code Quality: Advanced - Make the Happy Path Prominent",
"maincontent": "The happy path (i.e. the execution path taken when everything goes well) should be clear and prominent in your code. Restructure the code to make the happy path unindented as much as possible. It is the ‘unusual’ cases that should be indented. Someone reading the code should not get distracted by alternative paths taken when error conditions happen. One technique that could help in this regard is the use of guard clauses.\n\n Make sure to:\ndeal with unusual conditions as soon as they are detected so that the reader doesn't have to remember them for long.\nkeep the main path un-indented."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Follow a Standard - Introduction",
"maincontent": "One essential way to improve code quality is to follow a consistent style. That is why software engineers follow a strict coding standard (aka style guide).\n\nThe aim of a coding standard is to make the entire code base look like it was written by one person. A coding standard is usually specific to a programming language and specifies guidelines such as the locations of opening and closing braces, indentation styles and naming styles (e.g. whether to use Hungarian style, Pascal casing, Camel casing, etc.). It is important that the whole team/company uses the same coding standard and that the standard is generally not inconsistent with typical industry practices. If a company's coding standard is very different from what is typically used in the industry, new recruits will take longer to get used to the company's coding style.\n\n IDEs can help to enforce some parts of a coding standard e.g. indentation rules.\n\n"
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Follow a Standard - Basic",
"maincontent": "Go through the Java coding standard at @SE-EDU and learn the basic style rules.\n\n"
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Follow a Standard - Intermediate",
"maincontent": "Go through the Java coding standard at @SE-EDU and learn the intermediate style rules.\n\n"
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Name Well - Introduction",
"maincontent": "Proper naming improves the readability of code. It also reduces bugs caused by ambiguities regarding the intent of a variable or a method.\n\n"
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Name Well: \nBasic: Use Nouns for Things nd Verbs for Actions",
"maincontent": "Use nouns for classes/variables and verbs for methods/functions.\n\nDistinguish clearly between single-valued and multi-valued variables."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Name Well - Use Standard Words",
"maincontent": "Use correct spelling in names. Avoid 'texting-style' spelling. Avoid foreign language words, slang, and names that are only meaningful within specific contexts/times e.g. terms from private jokes, a TV show currently popular in your country."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Name Well - Use Name to Explain",
"maincontent": "A name is not just for differentiation; it should explain the named entity to the reader accurately and at a sufficient level of detail.\n\nIf a name has multiple words, they should be in a sensible order."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Name Well - Not too Long and Not too Short",
"maincontent": "While it is preferable not to have lengthy names, names that are 'too short' are even worse. If you must abbreviate or use acronyms, do it consistently. Explain their full meaning at an obvious location."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Name Well - Avoid Misleading Names",
"maincontent": "Related things should be named similarly, while unrelated things should NOT.\n\nExample: Consider these variables\n\ncolorBlack: hex value for color black\ncolorWhite: hex value for color white\ncolorBlue: number of times blue is used\nhexForRed: hex value for color red\nThis is misleading because colorBlue is named similar to colorWhite and colorBlack but has a different purpose while hexForRed is named differently but has a very similar purpose to the first two variables. The following is better:\n\nhexForBlack hexForWhite hexForRed\nblueColorCount\nAvoid misleading or ambiguous names (e.g. those with multiple meanings), similar sounding names, hard-to-pronounce ones (e.g. avoid ambiguities like \"is that a lowercase L, capital I or number 1?\", or \"is that number 0 or letter O?\"), almost similar names.\n\n"
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Avoid Unsafe Shortcuts - Introduction",
"maincontent": "It is safer to use language constructs in the way they are meant to be used, even if the language allows shortcuts. Such coding practices are common sources of bugs. Know them and avoid them."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Avoid Unsafe Shortcuts - Use the Default Branch",
"maincontent": "Always include a default branch in case statements.\n\nFurthermore, use it for the intended default action and not just to execute the last option. If there is no default action, you can use the default branch to detect errors (i.e. if execution reached the default branch, raise a suitable error). This also applies to the final else of an if-else construct. That is, the final else should mean 'everything else', and not the final option. Do not use else when an if condition can be explicitly specified, unless there is absolutely no other possibility."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Avoid Unsafe Shortcuts - Don't Recycle Variables or Parameters",
"maincontent": "Use one variable for one purpose. Do not reuse a variable for a different purpose other than its intended one, just because the data type is the same.\nDo not reuse formal parameters as local variables inside the method."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Avoid Unsafe Shortcuts - Avoid Empty Catch Blocks",
"maincontent": "Never write an empty catch statement. At least give a comment to explain why the catch block is left empty."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Avoid Unsafe Shortcuts - Delete Dead Code",
"maincontent": "You might feel reluctant to delete code you have painstakingly written, even if you have no use for that code anymore (\"I spent a lot of time writing that code; what if I need it again?\"). Consider all code as baggage you have to carry; get rid of unused code the moment it becomes redundant. If you need that code again, simply recover it from the revision control tool you are using. Deleting code you wrote previously is a sign that you are improving."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Avoid Unsafe Shortcuts - Minimise Scope of Variables",
"maincontent": "Minimize global variables. Global variables may be the most convenient way to pass information around, but they do create implicit links between code segments that use the global variable. Avoid them as much as possible.\n\nDefine variables in the least possible scope. For example, if the variable is used only within the if block of the conditional statement, it should be declared inside that if block.\n\nThe most powerful technique for minimizing the scope of a local variable is to declare it where it is first used. -- Effective Java, by Joshua Bloch\n\n Resources:\n\nRefactoring: Reduce Scope of Variable"
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Avoid Unsafe Shortcuts - Minimize Code Duplication",
"maincontent": "Code duplication, especially when you copy-paste-modify code, often indicates a poor quality implementation. While it may not be possible to have zero duplication, always think twice before duplicating code; most often there is a better alternative.\n\nThis guideline is closely related to the DRY Principle ."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Comment sufficiently but Minimally - Introduction",
"maincontent": "Some think commenting heavily increases the 'code quality'. That is not so. Avoid writing comments to explain bad code. Improve the code to make it self-explanatory."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Comment sufficiently but Minimally - Do Not Repeat the Obvious",
"maincontent": "If the code is self-explanatory, refrain from repeating the description in a comment just for the sake of 'good documentation'."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Comment sufficiently but Minimally - Write to the Reader",
"maincontent": "Do not write comments as if they are private notes to yourself. Instead, write them well enough to be understood by another programmer. One type of comment that is almost always useful is the header comment that you write for a class or an operation to explain its purpose."
},
{
"title": "Implementation",
"header": "Code Quality: Guideline to Comment sufficiently but Minimally - Explain WHAT and WHY, not HOW",
"maincontent": "Comments should explain the what and why aspects of the code, rather than the how aspect.\n\n What: The specification of what the code is supposed to do. The reader can compare such comments to the implementation to verify if the implementation is correct.\n\n Why: The rationale for the current implementation.\n\n How: The explanation for how the code works. This should already be apparent from the code, if the code is self-explanatory. Adding comments to explain the same thing is redundant."
},
{
"title": "Implementation",
"header": "Refactoring: What",
"maincontent": "The first version of the code you write may not be of production quality. It is OK to first concentrate on making the code work, rather than worry over the quality of the code, as long as you improve the quality later. This process of improving a program's internal structure in small steps without modifying its external behavior is called refactoring.\n\nRefactoring is not rewriting: Discarding poorly-written code entirely and re-writing it from scratch is not refactoring because refactoring needs to be done in small steps.\nRefactoring is not bug fixing: By definition, refactoring is different from bug fixing or any other modifications that alter the external behavior (e.g. adding a feature) of the component in concern.\n Improving code structure can have many secondary benefits: e.g.\n\nhidden bugs become easier to spot\nimprove performance (sometimes, simpler code runs faster than complex code because simpler code is easier for the compiler to optimize).\nGiven below are two common refactorings ( more).\n\nRefactoring Name: Consolidate Duplicate Conditional Fragments\n\nSituation: The same fragment of code is in all branches of a conditional expression.\n\nMethod: Move it outside of the expression.\n\nRefactoring Name: Extract Method\n\nSituation: You have a code fragment that can be grouped together.\n\nMethod: Turn the fragment into a method whose name explains the purpose of the method.\n\n Example:\n\nvoid printOwing() {\n printBanner();\n\n // print details\n System.out.println(\"name:\t\" + name);\n System.out.println(\"amount\t\" + getOutstanding());\n}\n\nvoid printOwing() {\n printBanner();\n printDetails(getOutstanding());\n}\n\nvoid printDetails(double outstanding) {\n System.out.println(\"name:\t\" + name);\n System.out.println(\"amount\t\" + outstanding);\n}\n Some IDEs have builtin support for basic refactorings such as automatically renaming a variable/method/class in all places it has been used.\n\n Refactoring, even if done with the aid of an IDE, may still result in regressions. Therefore, each small refactoring should be followed by regression testing.\n\n"
},
{
"title": "Implementation",
"header": "Refactoring: How",
"maincontent": "Given below are some more commonly used refactorings. A more comprehensive list is available at refactoring-catalog.\n\nConsolidate Conditional Expression\nDecompose Conditional\nInline Method\nRemove Double Negative\nReplace Magic Literal\nReplace Nested Conditional with Guard Clauses\nReplace Parameter with Explicit Methods\nReverse Conditional\nSplit Loop\nSplit Temporary Variable"
},
{
"title": "Implementation",
"header": "Refactoring: When",
"maincontent": "You know that it is important to refactor frequently so as to avoid the accumulation of ‘messy’ code which might get out of control. But how much refactoring is too much refactoring? It is too much refactoring when the benefits no longer justify the cost. The costs and the benefits depend on the context. That is why some refactorings are ‘opposites’ of each other (e.g. extract method vs inline method).\n\n"
},
{
"title": "Implementation",
"header": "Documentation: Introduction - What",
"maincontent": "Developer-to-developer documentation can be in one of two forms:\n\nDocumentation for developer-as-user: Software components are written by developers and reused by other developers, which means there is a need to document how such components are to be used. Such documentation can take several forms:\nAPI documentation: APIs expose functionality in small-sized, independent and easy-to-use chunks, each of which can be documented systematically.\nTutorial-style instructional documentation: In addition to explaining functions/methods independently, some higher-level explanations of how to use an API can be useful.\n Example of API Documentation: String API.\n Example of tutorial-style documentation: Java Internationalization Tutorial.\nDocumentation for developer-as-maintainer: There is a need to document how a system or a component is designed, implemented and tested so that other developers can maintain and evolve the code. Writing documentation of this type is harder because of the need to explain complex internal details. However, given that readers of this type of documentation usually have access to the source code itself, only some information needs to be included in the documentation, as code (and code comments) can also serve as a complementary source of information.\n An example: se-edu/addressbook-level4 Developer Guide.\n\nSoftware documentation (applies to both user-facing and developer-facing) is best kept in a text format for ease of version tracking. A writer-friendly source format is also desirable as non-programmers (e.g., technical writers) may need to author/edit such documents. As a result, formats such as Markdown, AsciiDoc, and PlantUML are often used for software documentation.\n\n"
},
{
"title": "Implementation",
"header": "Documentation: Guidelines - Go top-down, not bottom-up: What",
"maincontent": "When writing project documents, a top-down breadth-first explanation is easier to understand than a bottom-up one."
},
{
"title": "Implementation",
"header": "Documentation: Guidelines - Go top-down, not bottom-up: Why",
"maincontent": "The main advantage of the top-down approach is that the document is structured like an upside down tree (root at the top) and the reader can travel down a path she is interested in until she reaches the component she is interested to learn in-depth, without having to read the entire document or understand the whole system."
},
{
"title": "Implementation",
"header": "Documentation: Guidelines - Go top-down, not bottom-up: How",
"maincontent": " To explain a system called SystemFoo with two sub-systems, FrontEnd and BackEnd, start by describing the system at the highest level of abstraction, and progressively drill down to lower level details. An outline for such a description is given below.\n\n[First, explain what the system is, in a black-box fashion (no internal details, only the external view).]\n\nSystemFoo is a ....\n\n[Next, explain the high-level architecture of SystemFoo, referring to its major components only.]\n\nSystemFoo consists of two major components: FrontEnd and BackEnd.\n\nThe job of FrontEnd is to ... while the job of BackEnd is to ...\n\nAnd this is how FrontEnd and BackEnd work together ...\n\n[Now you can drill down to FrontEnd's details.]\n\nFrontEnd consists of three major components: A, B, C\n\nA's job is to ...\nB's job is to...\nC's job is to...\n\nAnd this is how the three components work together ...\n\n[At this point, further drill down to the internal workings of each component. A reader who is not interested in knowing the nitty-gritty details can skip ahead to the section on BackEnd.]\n\nIn-depth description of A\n\nIn-depth description of B\n\n...\n\n[At this point drill down to the details of the BackEnd.]\n\n..."
},
{
"title": "Implementation",
"header": "Documentation: Guidelines - Aim for comprehensibility: What",
"maincontent": "Technical documents exist to help others understand technical details. Therefore, it is not enough for the documentation to be accurate and comprehensive; it should also be comprehensible."
},
{
"title": "Implementation",
"header": "Documentation: Guidelines - Aim for comprehensibility: How",
"maincontent": "Here are some tips on writing effective documentation.\n\nUse plenty of diagrams: It is not enough to explain something in words; complement it with visual illustrations (e.g. a UML diagram).\nUse plenty of examples: When explaining algorithms, show a running example to illustrate each step of the algorithm, in parallel to worded explanations.\nUse simple and direct explanations: Convoluted explanations and fancy words will annoy readers. Avoid long sentences.\nGet rid of statements that do not add value: For example, 'We made sure our system works perfectly' (who didn't?), 'Component X has its own responsibilities' (of course it has!).\nIt is not a good idea to have separate sections for each type of artifact, such as 'use cases', 'sequence diagrams', 'activity diagrams', etc. Such a structure, coupled with the indiscriminate inclusion of diagrams without justifying their need, indicates a failure to understand the purpose of documentation. Include diagrams when they are needed to explain something. If you want to provide additional diagrams for completeness' sake, include them in the appendix as a reference.\n"
},
{
"title": "Implementation",
"header": "Documentation: Guidelines - Document minimally, but sufficiently: What",
"maincontent": "Aim for 'just enough' developer documentation.\n\nWriting and maintaining developer documents is an overhead. You should try to minimize that overhead.\nIf the readers are developers who will eventually read the code, the documentation should complement the code and should provide only just enough guidance to get started."
},
{
"title": "Implementation",
"header": "Documentation: Guidelines - Document minimally, but sufficiently: How",
"maincontent": "Anything that is already clear in the code need not be described in words. Instead, focus on providing higher level information that is not readily visible in the code or comments.\n\nRefrain from duplicating chunks of text. When describing several similar algorithms/designs/APIs, etc., do not simply duplicate large chunks of text. Instead, describe the similarities in one place and emphasize only the differences in other places. It is very annoying to see pages and pages of similar text without any indication as to how they differ from each other."
},
{
"title": "Implementation",
"header": "Documentation: Tools - JavaDoc: What",
"maincontent": "JavaDoc is a tool for generating API documentation in HTML format from comments in the source code. In addition, modern IDEs use JavaDoc comments to generate explanatory tooltips.\n\n An example method header comment in JavaDoc format (adapted from Oracle's Java documentation)\n\n/**\n * Returns an Image object that can then be painted on the screen.\n * The url argument must specify an absolute {@link URL}. The name\n * argument is a specifier that is relative to the url argument.\n * <p>\n * This method always returns immediately, whether or not the\n * image exists. When this applet attempts to draw the image on\n * the screen, the data will be loaded. The graphics primitives\n * that draw the image will incrementally paint on the screen.\n *\n * @param url an absolute URL giving the base location of the image\n * @param name the location of the image, relative to the url argument\n * @return the image at the specified URL\n * @see Image\n */\npublic Image getImage(URL url, String name) {\n try {\n return getImage(new URL(url, name));\n } catch (MalformedURLException e) {\n return null;\n }\n}"
},
{
"title": "Implementation",
"header": "Documentation: Tools - JavaDoc: How",
"maincontent": "In the absence of more extensive guidelines (e.g., given in a coding standard adopted by your project), you can follow the two examples below in your code.\n\nA minimal JavaDoc comment example for methods:\n\n/**\n * Returns lateral location of the specified position.\n * If the position is unset, NaN is returned.\n *\n * @param x X coordinate of position.\n * @param y Y coordinate of position.\n * @param zone Zone of position.\n * @return Lateral location.\n * @throws IllegalArgumentException If zone is <= 0.\n */\npublic double computeLocation(double x, double y, int zone)\n throws IllegalArgumentException {\n // ...\n}\n\nA minimal JavaDoc comment example for classes:\n\npackage ...\n\nimport ...\n\n/**\n * Represents a location in a 2D space. A <code>Point</code> object corresponds to\n * a coordinate represented by two integers e.g., <code>3,6</code>\n */\npublic class Point {\n // ...\n}"
},
{
"title": "Implementation",
"header": "Error Handling: Introduction",
"maincontent": "Well-written applications include error-handling code that allows them to recover gracefully from unexpected errors. When an error occurs, the application may need to request user intervention, or it may be able to recover on its own. In extreme cases, the application may log the user off or shut down the system. -- Microsoft"
},
{
"title": "Implementation",
"header": "Error Handling: Exceptions - What",
"maincontent": "Exceptions are used to deal with 'unusual' but not entirely unexpected situations that the program might encounter at runtime.\n\nException:\n\nThe term exception is shorthand for the phrase \"exceptional event.\" An exception is an event, which occurs during the execution of a program, that disrupts the normal flow of the program's instructions. –- Java Tutorial (Oracle Inc.)\n\n Examples:\n\nA network connection encounters a timeout due to a slow server.\nThe code tries to read a file from the hard disk but the file is corrupted and cannot be read."
},
{
"title": "Implementation",
"header": "Error Handling: Exceptions - How",
"maincontent": "Most languages allow code that encountered an \"exceptional\" situation to encapsulate details of the situation in an Exception object and throw/raise that object so that another piece of code can catch it and deal with it. This is especially useful when the code that encountered the unusual situation does not know how to deal with it.\n\nThe extract below from the -- Java Tutorial (with slight adaptations) explains how exceptions are typically handled.\n\nWhen an error occurs at some point in the execution, the code being executed creates an exception object and hands it off to the runtime system. The exception object contains information about the error, including its type and the state of the program when the error occurred. Creating an exception object and handing it to the runtime system is called throwing an exception.\n\nAfter a method throws an exception, the runtime system attempts to find something to handle it in the call stack. The runtime system searches the call stack for a method that contains a block of code that can handle the exception. This block of code is called an exception handler. The search begins with the method in which the error occurred and proceeds through the call stack in the reverse order in which the methods were called. When an appropriate handler is found, the runtime system passes the exception to the handler. An exception handler is considered appropriate if the type of the exception object thrown matches the type that can be handled by the handler.\n\nThe exception handler chosen is said to catch the exception. If the runtime system exhaustively searches all the methods on the call stack without finding an appropriate exception handler, the program terminates.\n\nAdvantages of exception handling in this way:\n\nThe ability to propagate error information through the call stack.\nThe separation of code that deals with 'unusual' situations from the code that does the 'usual' work.\n"
},
{
"title": "Implementation",
"header": "Error Handling: Exceptions - When",
"maincontent": "In general, use exceptions only for 'unusual' conditions. Use normal return statements to pass control to the caller for conditions that are 'normal'."
},
{
"title": "Implementation",
"header": "Error Handling: Assertions - What",
"maincontent": "Assertions are used to define assumptions about the program state so that the runtime can verify them. An assertion failure indicates a possible bug in the code because the code has resulted in a program state that violates an assumption about how the code should behave.\n\n An assertion can be used to express something like when the execution comes to this point, the variable v cannot be null.\n\nIf the runtime detects an assertion failure, it typically takes some drastic action such as terminating the execution with an error message. This is because an assertion failure indicates a possible bug and the sooner the execution stops, the safer it is.\n\n In the Java code below, suppose you set an assertion that timeout returned by Config.getTimeout() is greater than 0. Now, if Config.getTimeout() returns -1 in a specific execution of this line, the runtime can detect it as an assertion failure -- i.e. an assumption about the expected behavior of the code turned out to be wrong which could potentially be the result of a bug -- and take some drastic action such as terminating the execution.\n\nint timeout = Config.getTimeout(); "
},
{
"title": "Implementation",
"header": "Error Handling: Assertions - How",
"maincontent": "Use the assert keyword to define assertions.\n\n This assertion will fail with the message x should be 0 if x is not 0 at this point.\n\nx = getX();\nassert x == 0 : \"x should be 0\";\n...\nAssertions can be disabled without modifying the code.\n\n java -enableassertions HelloWorld (or java -ea HelloWorld) will run HelloWorld with assertions enabled while java -disableassertions HelloWorld will run it without verifying assertions.\n\nJava disables assertions by default. This could create a situation where you think all assertions are being verified as true while in fact they are not being verified at all. Therefore, remember to enable assertions when you run the program if you want them to be in effect.\n\n Enable assertions in Intellij (how?) and get an assertion to fail temporarily (e.g. insert an assert false into the code temporarily) to confirm assertions are being verified.\n\n Java assert vs JUnit assertions: They are similar in purpose but JUnit assertions are more powerful and customized for testing. In addition, JUnit assertions are not disabled by default. We recommend you use JUnit assertions in test code and Java assert in functional code.\n\n Resources\n\nTutorials:\n\nJava Assertions -- a simple tutorial from javatpoint.com\nProgramming with Assertions (first half) -- a more detailed tutorial from Oracle\nBest practices:\n\nProgramming with Assertions (second half) -- from Oracle (also listed above as a tutorial) contains some best practices towards the end of the article."
},
{
"title": "Implementation",
"header": "Error Handling: Assertions - When",
"maincontent": "It is recommended that assertions be used liberally in the code. Their impact on performance is considered low and worth the additional safety they provide.\n\nDo not use assertions to do work because assertions can be disabled. If not, your program will stop working when assertions are not enabled.\n\n The code below will not invoke the writeFile() method when assertions are disabled. If that method is performing some work that is necessary for your program, your program will not work correctly when assertions are disabled.\n\n...\nassert writeFile() : \"File writing is supposed to return true\";\nAssertions are suitable for verifying assumptions about Internal Invariants, Control-Flow Invariants, Preconditions, Postconditions, and Class Invariants. Refer to [Programming with Assertions (second half)] to learn more.\n\nExceptions and assertions are two complementary ways of handling errors in software but they serve different purposes. Therefore, both assertions and exceptions should be used in code.\n\nThe raising of an exception indicates an unusual condition created by the user (e.g. user inputs an unacceptable input) or the environment (e.g., a file needed for the program is missing).\nAn assertion failure indicates the programmer made a mistake in the code (e.g., a null value is returned from a method that is not supposed to return null under any circumstances).\n"
},
{
"title": "Implementation",
"header": "Error Handling: Logging - What",
"maincontent": "Logging is the deliberate recording of certain information during a program execution for future reference. Logs are typically written to a log file but it is also possible to log information in other ways e.g. into a database or a remote server.\n\nLogging can be useful for troubleshooting problems. A good logging system records some system information regularly. When bad things happen to a system e.g. an unanticipated failure, their associated log files may provide indications of what went wrong and actions can then be taken to prevent it from happening again.\n\n A log file is like the black box of an airplane; they don't prevent problems but they can be helpful in understanding what went wrong after the fact."
},
{
"title": "Implementation",
"header": "Error Handling: Logging - How",
"maincontent": "Most programming environments come with logging systems that allow sophisticated forms of logging. They have features such as the ability to enable and disable logging easily or to change the logging intensity.\n\n This sample Java code uses Java’s default logging mechanism.\n\nFirst, import the relevant Java package:\n\nimport java.util.logging.*;\nNext, create a Logger:\n\nprivate static Logger logger = Logger.getLogger(\"Foo\");\nNow, you can use the Logger object to log information. Note the use of a logging level for each message. When running the code, the logging level can be set to WARNING so that log messages specified as having INFO level (which is a lower level than WARNING) will not be written to the log file at all.\n\n// log a message at INFO level\nlogger.log(Level.INFO, \"going to start processing\");\n// ...\nprocessInput();\nif (error) {\n // log a message at WARNING level\n logger.log(Level.WARNING, \"processing error\", ex);\n}\n// ...\nlogger.log(Level.INFO, \"end of processing\");"
},
{
"title": "Implementation",
"header": "Error Handling: Defensive Programming - What",
"maincontent": "A defensive programmer codes under the assumption \"if you leave room for things to go wrong, they will go wrong\". Therefore, a defensive programmer proactively tries to eliminate any room for things to go wrong.\n\n Consider a method MainApp#getConfig() that returns a Config object containing configuration data. A typical implementation is given below:\n\nclass MainApp {\n Config config;\n \n /** Returns the config object */\n Config getConfig() {\n return config;\n }\n}\nIf the returned Config object is not meant to be modified, a defensive programmer might use a more defensive implementation given below. This is more defensive because even if the returned Config object is modified (although it is not meant to be), it will not affect the config object inside the MainApp object.\n\n /** Returns a copy of the config object */\n Config getConfig() {\n return config.copy(); // return a defensive copy\n }"
},
{
"title": "Implementation",
"header": "Error Handling: Defensive Programming - Enforcing 1-to-1 Associations",
"maincontent": "Consider the association given below. A defensive implementation requires us to ensure that a MinedCell cannot exist without a Mine and vice versa which requires simultaneous object creation. However, Java can only create one object at a time. Given below are two alternative implementations, both of which violate the multiplicity for a short period of time.\n\n\nOption 1:\n\nclass MinedCell {\n private Mine mine;\n\n public MinedCell(Mine m) {\n if (m == null) {\n showError();\n }\n mine = m;\n }\n …\n}\nOption 1 forces us to keep a Mine without a MinedCell (until the MinedCell is created).\n\nOption 2:\n\nclass MinedCell {\n private Mine mine;\n\n public MinedCell() {\n mine = new Mine();\n }\n …\n}\nOption 2 is more defensive because the Mine is immediately linked to a MinedCell."
},
{
"title": "Implementation",
"header": "Error Handling: Defensive Programming - Enforcing Compulsory Associations",
"maincontent": "Consider two classes, Account and Guarantor, with an association as shown in the following diagram:\n\nExample:\n\n\nHere, the association is compulsory i.e. an Account object should always be linked to a Guarantor. One way to implement this is to simply use a reference variable, like this:\n\nclass Account {\n Guarantor guarantor;\n\n void setGuarantor(Guarantor g) {\n guarantor = g;\n }\n}\nHowever, what if someone else used the Account class like this?\n\nAccount a = new Account();\na.setGuarantor(null);\nThis results in an Account without a Guarantor! In a real banking system, this could have serious consequences! The code here did not try to prevent such a thing from happening. You can make the code more defensive by proactively enforcing the multiplicity constraint, like this:\n\nclass Account {\n private Guarantor guarantor;\n\n public Account(Guarantor g) {\n if (g == null) {\n stopSystemWithMessage(\"multiplicity violated. Null Guarantor\");\n }\n guarantor = g;\n }\n public void setGuarantor(Guarantor g) {\n if (g == null) {\n stopSystemWithMessage(\"multiplicity violated. Null Guarantor\");\n }\n guarantor = g;\n }\n …\n}"
},
{
"title": "Implementation",
"header": "Error Handling: Defensive Programming - Enforcing Referential Integrity",
"maincontent": "A bidirectional association in the design (shown in (a)) is usually emulated at code level using two variables (as shown in (b)).\n\n\nclass Man {\n Woman girlfriend;\n\n void setGirlfriend(Woman w) {\n girlfriend = w;\n }\n …\n}\nclass Woman {\n Man boyfriend;\n\n void setBoyfriend(Man m) {\n boyfriend = m;\n }\n}\nThe two classes are meant to be used as follows:\n\nWoman jean;\nMan james;\n…\njames.setGirlfriend(jean);\njean.setBoyfriend(james);\nSuppose the two classes were used like this instead:\n\nWoman jean;\nMan james, yong;\n…\njames.setGirlfriend(jean); \njean.setBoyfriend(yong); \nNow James' girlfriend is Jean, while Jean's boyfriend is not James. This situation is a result of the code not being defensive enough to stop this \"love triangle\". In such a situation, you could say that the referential integrity has been violated. This means that there is an inconsistency in object references.\n\n\nOne way to prevent this situation is to implement the two classes as shown below. Note how the referential integrity is maintained.\n\npublic class Woman {\n private Man boyfriend;\n\n public void setBoyfriend(Man m) {\n if (boyfriend == m) {\n return;\n }\n if (boyfriend != null) {\n boyfriend.breakUp();\n }\n boyfriend = m;\n m.setGirlfriend(this);\n }\n\n public void breakUp() {\n boyfriend = null;\n } \n ...\n}\npublic class Man {\n private Woman girlfriend;\n\n public void setGirlfriend(Woman w) {\n if (girlfriend == w) {\n return;\n }\n if (girlfriend != null) {\n girlfriend.breakUp();\n }\n girlfriend = w;\n w.setBoyfriend(this);\n }\n public void breakUp() {\n girlfriend = null;\n } \n ...\n}\nWhen james.setGirlfriend(jean) is executed, the code ensures that james breaks up with any current girlfriend before he accepts jean as his girlfriend. Furthermore, the code ensures that jean breaks up with any existing boyfriends before accepting james as her boyfriend.\n\n"
},
{
"title": "Implementation",
"header": "Error Handling: Defensive Programming - When",
"maincontent": "It is not necessary to be 100% defensive all the time. While defensive code may be less prone to be misused or abused, such code can also be more complicated and slower to run.\n\nThe suitable degree of defensiveness depends on many factors such as:\n\nHow critical is the system?\nWill the code be used by programmers other than the author?\nThe level of programming language support for defensive programming\nThe overhead of being defensive\n"
},
{
"title": "Implementation",
"header": "Error Handling: Design-by-contract approach - Design by Contract",
"maincontent": "Design by contract (DbC) is an approach for designing software that requires defining formal, precise and verifiable interface specifications for software components.\n\nSuppose an operation is implemented with the behavior specified precisely in the API (preconditions, post conditions, exceptions etc.). When following the defensive approach, the code should first check if the preconditions have been met. Typically, exceptions are thrown if preconditions are violated. In contrast, the Design-by-Contract (DbC) approach to coding assumes that it is the responsibility of the caller to ensure all preconditions are met. The operation will honor the contract only if the preconditions have been met. If any of them have not been met, the behavior of the operation is \"unspecified\".\n\nLanguages such as Eiffel have native support for DbC. For example, preconditions of an operation can be specified in Eiffel and the language runtime will check precondition violations without the need to do it explicitly in the code. To follow the DbC approach in languages such as Java and C++ where there is no built-in DbC support, assertions can be used to confirm pre-conditions.\n\n"
},
{
"title": "Implementation",
"header": "Integration: Introduction",
"maincontent": "Combining parts of a software product to form a whole is called integration. It is also one of the most troublesome tasks and it rarely goes smoothly."
},
{
"title": "Implementation",
"header": "Integration: Approaches - 'Late and One Time' vs 'Early and Frequent'",
"maincontent": "In terms of timing and frequency, there are two general approaches to integration: late and one-time, early and frequent.\n\nLate and one-time: wait till all components are completed and integrate all finished components near the end of the project.\n\nThis approach is not recommended because integration often causes many component incompatibilities (due to previous miscommunications and misunderstandings) to surface which can lead to delivery delays i.e. Late integration → incompatibilities found → major rework required → cannot meet the delivery date.\n\nEarly and frequent: integrate early and evolve each part in parallel, in small steps, re-integrating frequently.\n\n A walking skeleton can be written first. This can be done by one developer, possibly the one in charge of integration. After that, all developers can flesh out the skeleton in parallel, adding one feature at a time. After each feature is done, simply integrate the new code into the main system."
},
{
"title": "Implementation",
"header": "Integration: Approaches - Big-Bang vs Incremental Integration",
"maincontent": "Big-bang integration: integrate all components at the same time.\n\nBig-bang is not recommended because it will uncover too many problems at the same time which could make debugging and bug-fixing more complex than when problems are uncovered incrementally.\n\nIncremental integration: integrate a few components at a time. This approach is better than big-bang integration because it surfaces integration problems in a more manageable way."
},
{
"title": "Implementation",
"header": "Integration: Approaches - Top-Down vs Bottom-Up Integration",
"maincontent": "Based on the order in which components are integrated, incremental integration can be done in three ways.\n\nTop-down integration: higher-level components are integrated before bringing in the lower-level components. One advantage of this approach is that higher-level problems can be discovered early. One disadvantage is that this requires the use of stubs in place of lower level components until the real lower-level components are integrated into the system. Otherwise, higher-level components cannot function as they depend on lower level ones.\n\nBottom-up integration: the reverse of top-down integration. Note that when integrating lower level components, drivers may be needed to test the integrated components because the UI may not be integrated yet, just like how top-down integration needs stubs.\n\nSandwich integration: a mix of the top-down and bottom-up approaches. The idea is to do both top-down and bottom-up so as to 'meet' in the middle."
},
{
"title": "Implementation",
"header": "Integration: Build automation - What",
"maincontent": "Build automation tools automate the steps of the build process, usually by means of build scripts.\n\nIn a non-trivial project, building a product from its source code can be a complex multi-step process. For example, it can include steps such as: pull code from the revision control system, compile, link, run automated tests, automatically update release documents (e.g. build number), package into a distributable, push to repo, deploy to a server, delete temporary files created during building/testing, email developers of the new build, and so on. Furthermore, this build process can be done ‘on demand’, it can be scheduled (e.g. every day at midnight) or it can be triggered by various events (e.g. triggered by a code push to the revision control system).\n\nSome of these build steps such as compiling, linking and packaging, are already automated in most modern IDEs. For example, several steps happen automatically when the ‘build’ button of the IDE is clicked. Some IDEs even allow customization of this build process to some extent.\n\nHowever, most big projects use specialized build tools to automate complex build processes.\n\n Some popular build tools relevant to Java developers: Gradle, Maven, Apache Ant, GNU Make\n\n Some other build tools: Grunt (JavaScript), Rake (Ruby)\n\nSome build tools also serve as dependency management tools. Modern software projects often depend on third party libraries that evolve constantly. That means developers need to download the correct version of the required libraries and update them regularly. Therefore, dependency management is an important part of build automation. Dependency management tools can automate that aspect of a project.\n\n Maven and Gradle, in addition to managing the build process, can play the role of dependency management tools too."
},
{
"title": "Implementation",
"header": "Integration: Build automation - Continuous Integration and Continuous Deployment",
"maincontent": "An extreme application of build automation is called continuous integration (CI) in which integration, building, and testing happens automatically after each code change.\n\nA natural extension of CI is Continuous Deployment (CD) where the changes are not only integrated continuously, but also deployed to end-users at the same time.\n\n Some examples of CI/CD tools: Travis, Jenkins, Appveyor, CircleCI, GitHub Actions"
},
{
"title": "Implementation",
"header": "Reuse: Introduction - What",
"maincontent": "Reuse is a major theme in software engineering practices. By reusing tried-and-tested components, the robustness of a new software system can be enhanced while reducing the manpower and time requirement. Reusable components come in many forms; it can be reusing a piece of code, a subsystem, or a whole software."
},
{
"title": "Implementation",
"header": "Reuse: Introduction - When",
"maincontent": "While you may be tempted to use many libraries/frameworks/platforms that seem to crop up on a regular basis and promise to bring great benefits, note that there are costs associated with reuse. Here are some:\n\nThe reused code may be an overkill (think using a sledgehammer to crack a nut), increasing the size of, and/or degrading the performance of, your software.\nThe reused software may not be mature/stable enough to be used in an important product. That means the software can change drastically and rapidly, possibly in ways that break your software.\nNon-mature software has the risk of dying off as fast as they emerged, leaving you with a dependency that is no longer maintained.\nThe license of the reused software (or its dependencies) restrict how you can use/develop your software.\nThe reused software might have bugs, missing features, or security vulnerabilities that are important to your product, but not so important to the maintainers of that software, which means those flaws will not get fixed as fast as you need them to.\nMalicious code can sneak into your product via compromised dependencies.\n"
},
{
"title": "Implementation",
"header": "Reuse: APIs - What",
"maincontent": "An Application Programming Interface (API) specifies the interface through which other programs can interact with a software component. It is a contract between the component and its clients.\n\n A class has an API (e.g., API of the Java String class, API of the Python str class) which is a collection of public methods that you can invoke to make use of the class.\n\n The GitHub API is a collection of web request formats that the GitHub server accepts and their corresponding responses. You can write a program that interacts with GitHub through that API.\n\nWhen developing large systems, if you define the API of each component early, the development team can develop the components in parallel because the future behavior of the other components are now more predictable.\n\n"
},
{
"title": "Implementation",
"header": "Reuse: APIs - Designing APIs",
"maincontent": "An API should be well-designed (i.e. should cater for the needs of its users) and well-documented.\n\nWhen you write software consisting of multiple components, you need to define the API of each component.\n\nOne approach is to let the API emerge and evolve over time as you write code.\n\nAnother approach is to define the API up-front. Doing so allows us to develop the components in parallel.\n\nYou can use UML sequence diagrams to analyze the required interactions between components in order to discover the required API.\n\nAs you analyze the interactions between components using sequence diagrams, you discover the API of those components. For example, the diagram above tells us that the MSLogic component API should have the methods:\n\nnew()\ngetWidth:int\ngetHeight():int\ngetRemainingMineCount():int\nMore details can be included to increase the precision of the method definitions before coding. Such precision is important to avoid misunderstandings between the developer of the class and developers of other classes that interact with the class.\n\nOperation: newGame(): void\nDescription: Generates a new WxH minefield with M mines. Any existing minefield will be overwritten.\nPreconditions: None\nPostconditions: A new minefield is created. Game state is READY.\nPreconditions are the conditions that must be true before calling this operation. Postconditions describe the system after the operation is complete. Note that postconditions do not say what happens during the operation. Here is another example:\n\nOperation: clearCellAt(int x, int y): void\nDescription: Records the cell at x, y as cleared.\nParameters: x, y coordinates of the cell\nPreconditions: game state is READY or IN_PLAY. x and y are in 0..(H-1) and 0..(W-1), respectively.\nPostconditions: Cell at x, y changes state to ZERO, ONE, TWO, THREE, …, EIGHT, or INCORRECTLY_CLEARED. Game state changes to IN_PLAY, WON or LOST as appropriate."
},
{
"title": "Implementation",
"header": "Reuse: Libraries - What",
"maincontent": "A library is a collection of modular code that is general and can be used by other programs.\n\n Java classes you get with the JDK (such as String, ArrayList, HashMap, etc.) are library classes that are provided in the default Java distribution.\n\n Natty is a Java library that can be used for parsing strings that represent dates e.g. The 31st of April in the year 2008"
},
{
"title": "Implementation",
"header": "Reuse: Libraries - How",
"maincontent": "These are the typical steps required to use a library:\n\nRead the documentation to confirm that its functionality fits your needs.\nCheck the license to confirm that it allows reuse in the way you plan to reuse it. For example, some libraries might allow non-commercial use only.\nDownload the library and make it accessible to your project. Alternatively, you can configure your dependency management tool to do it for you.\nCall the library API from your code where you need to use the library's functionality."
},
{
"title": "Implementation",
"header": "Reuse: Framework - What",
"maincontent": "The overall structure and execution flow of a specific category of software systems can be very similar. The similarity is an opportunity to reuse at a high scale.\n\n Running example:\n\nIDEs for different programming languages are similar in how they support editing code, organizing project files, debugging, etc.\n\nA software framework is a reusable implementation of a software (or part thereof) providing generic functionality that can be selectively customized to produce a specific application.\n\n Running example:\n\nEclipse is an IDE framework that can be used to create IDEs for different programming languages.\n\nSome frameworks provide a complete implementation of a default behavior which makes them immediately usable.\n\n Running example:\n\nEclipse is a fully functional Java IDE out-of-the-box.\n\nA framework facilitates the adaptation and customization of some desired functionality.\n\n Running example:\n\nThe Eclipse plugin system can be used to create an IDE for different programming languages while reusing most of the existing IDE features of Eclipse.\n\nE.g. https://marketplace.eclipse.org/content/pydev-python-ide-eclipse\n\nSome frameworks cover only a specific component or an aspect.\n\n JavaFX is a framework for creating Java GUIs. Tkinter is a GUI framework for Python.\n\n More examples of frameworks\n\nFrameworks for web-based applications: Drupal (PHP), Django (Python), Ruby on Rails (Ruby), Spring (Java)\nFrameworks for testing: JUnit (Java), unittest (Python), Jest (JavaScript)"
},
{
"title": "Implementation",
"header": "Reuse: Framework - Framework vs Library",
"maincontent": "Although both frameworks and libraries are reuse mechanisms, there are notable differences:\n\nLibraries are meant to be used ‘as is’ while frameworks are meant to be customized/extended. e.g., writing plugins for Eclipse so that it can be used as an IDE for different languages (C++, PHP, etc.), adding modules and themes to Drupal, and adding test cases to JUnit.\n\nYour code calls the library code while the framework code calls your code. Frameworks use a technique called inversion of control, aka the “Hollywood principle” (i.e. don’t call us, we’ll call you!). That is, you write code that will be called by the framework, e.g. writing test methods that will be called by the JUnit framework. In the case of libraries, your code calls libraries.\n\n"
},
{
"title": "Implementation",
"header": "Reuse: Framework - Platforms",
"maincontent": "A platform provides a runtime environment for applications. A platform is often bundled with various libraries, tools, frameworks, and technologies in addition to a runtime environment but the defining characteristic of a software platform is the presence of a runtime environment.\n\n Technically, an operating system can be called a platform. For example, Windows PC is a platform for desktop applications while iOS is a platform for mobile applications.\n\n Two well-known examples of platforms are JavaEE and .NET, both of which sit above the operating systems layer, and are used to develop enterprise applications. Infrastructure services such as connection pooling, load balancing, remote code execution, transaction management, authentication, security, messaging etc. are done similarly in most enterprise applications. Both JavaEE and .NET provide these services to applications in a customizable way without developers having to implement them from scratch every time.\n\nJavaEE (Java Enterprise Edition) is both a framework and a platform for writing enterprise applications. The runtime used by JavaEE applications is the JVM (Java Virtual Machine) that can run on different Operating Systems.\n.NET is a similar platform and framework. Its runtime is called CLR (Common Language Runtime) and it is usually used on Windows machines."
},
{
"title": "Implementation",
"header": "Reuse: Cloud Computing - What",
"maincontent": "Cloud computing is the delivery of computing as a service over the network, rather than a product running on a local machine. This means the actual hardware and software is located at a remote location, typically, at a large server farm, while users access them over the network. Maintenance of the hardware and software is managed by the cloud provider while users typically pay for only the amount of services they use. This model is similar to the consumption of electricity; the power company manages the power plant, while the consumers pay them only for the electricity used. The cloud computing model optimizes hardware and software utilization and reduces the cost to consumers. Furthermore, users can scale up/down their utilization at will without having to upgrade their hardware and software. The traditional non-cloud model of computing is similar to everyone buying their own generators to create electricity for their own use."
},
{
"title": "Implementation",
"header": "Reuse: Cloud Computing - Iaas, PaaS and SaaS",
"maincontent": "Cloud computing can deliver computing services at three levels:\n\nInfrastructure as a service (IaaS) delivers computer infrastructure as a service. For example, a user can deploy virtual servers on the cloud instead of buying physical hardware and installing server software on them. Another example would be a customer using storage space on the cloud for off-site storage of data. Rackspace is an example of an IaaS cloud provider. Amazon Elastic Compute Cloud (Amazon EC2) is another one.\n\nPlatform as a service (PaaS) provides a platform on which developers can build applications. Developers do not have to worry about infrastructure issues such as deploying servers or load balancing as is required when using IaaS. Those aspects are automatically taken care of by the platform. The price to pay is reduced flexibility; applications written on PaaS are limited to facilities provided by the platform. A PaaS example is the Google App Engine where developers can build applications using Java, Python, PHP, or Go whereas Amazon EC2 allows users to deploy applications written in any language on their virtual servers.\n\nSoftware as a service (SaaS) allows applications to be accessed over the network instead of installing them on a local machine. For example, Google Docs is a SaaS word processing software, while Microsoft Word is a traditional word processing software.\n\n"
},
{
"title": "Quality assurance",
"header": "What",
"maincontent": "Software Quality Assurance (QA) is the process of ensuring that the software being built has the required levels of quality.\n\nWhile testing is the most common activity used in QA, there are other complementary techniques such as static analysis, code reviews, and formal verification."
},
{
"title": "Quality assurance",
"header": "Validation vs Verification",
"maincontent": "Quality Assurance = Validation + Verification\n\nQA involves checking two aspects:\n\n1.Validation: are you building the right system i.e., are the requirements correct?\n2.Verification: are you building the system right i.e., are the requirements implemented correctly?\n\nWhether something belongs under validation or verification is not that important. What is more important is that both are done, instead of limiting to only verification (i.e., remember that the requirements can be wrong too)."
},
{
"title": "Quality assurance",
"header": "Code Reviews",
"maincontent": "Reviews can be done in various forms. Some examples below:\n\nPull Request reviews\nProject Management Platforms such as GitHub and BitBucket allow the new code to be proposed as Pull Requests and provide the ability for others to review the code in the PR.\n\nIn pair programming\nAs pair programming involves two programmers working on the same code at the same time, there is an implicit review of the code by the other member of the pair.\n\nFormal inspections\nInspections involve a group of people systematically examining project artifacts to discover defects. Members of the inspection team play various roles during the process, such as:\n\nthe author - the creator of the artifact\nthe moderator - the planner and executor of the inspection meeting\nthe secretary - the recorder of the findings of the inspection\nthe inspector/reviewer - the one who inspects/reviews the artifact\n\nAdvantages of code review over testing:\nIt can detect functionality defects as well as other problems such as coding standard violations.\nIt can verify non-code artifacts and incomplete code.\nIt does not require test drivers or stubs.\n\nDisadvantages:\nIt is a manual process and therefore, error prone."
},
{
"title": "Quality assurance",
"header": "Static Analysis",
"maincontent": "Static analysis of code can find useful information such as unused variables, unhandled exceptions, style errors, and statistics. Most modern IDEs come with some inbuilt static analysis capabilities. For example, an IDE can highlight unused variables as you type the code into the editor.\n\nThe term static in static analysis refers to the fact that the code is analyzed without executing the code. In contrast, dynamic analysis requires the code to be executed to gather additional information about the code e.g., performance characteristics.\n\nHigher-end static analysis tools (static analyzers) can perform more complex analysis such as locating potential bugs, memory leaks, inefficient code structures, etc.\n\nLinters are a subset of static analyzers that specifically aim to locate areas where the code can be made 'cleaner'."
},
{
"title": "Quality assurance",
"header": "Formal verification",
"maincontent": "Formal verification uses mathematical techniques to prove the correctness of a program.\n\nAdvantages: \nFormal verification can be used to prove the absence of errors. In contrast, testing can only prove the presence of errors, not their absence.\n\nDisadvantages:\nIt only proves the compliance with the specification, but not the actual utility of the software.\nIt requires highly specialized notations and knowledge which makes it an expensive technique to administer. Therefore, formal verifications are more commonly used in safety-critical software such as flight control systems."
},
{
"title": "Testing",
"header": "What",
"maincontent": "When testing, you execute a set of test cases. A test case specifies how to perform a test. At a minimum, it specifies the input to the software under test (SUT) and the expected behavior.\n\nTest cases can be determined based on the specification, reviewing similar existing systems, or comparing to the past behavior of the SUT.\n\nA test case failure is a mismatch between the expected behavior and the actual behavior. A failure indicates a potential defect (or a bug), unless the error is in the test case itself."
},
{
"title": "Testing types",
"header": "Regression testing",
"maincontent": "When you modify a system, the modification may result in some unintended and undesirable effects on the system. Such an effect is called a regression.\n\nRegression testing is the re-testing of the software to detect regressions. Note that to detect regressions, you need to retest all related components, even if they had been tested before.\n\nRegression testing is more effective when it is done frequently, after each small change. However, doing so can be prohibitively expensive if testing is done manually. Hence, regression testing is more practical when it is automated."
},
{
"title": "Testing types",
"header": "Developer testing",
"maincontent": "Delaying testing until the full product is complete has a number of disadvantages:\n\nLocating the cause of a test case failure is difficult due to a large search space; in a large system, the search space could be millions of lines of code, written by hundreds of developers! The failure may also be due to multiple inter-related bugs.\nFixing a bug found during such testing could result in major rework, especially if the bug originated from the design or during requirements specification i.e. a faulty design or faulty requirements.\nOne bug might 'hide' other bugs, which could emerge only after the first bug is fixed.\nThe delivery may have to be delayed if too many bugs are found during testing.\n\nearly testing of partially developed software is usually, and by necessity, done by the developers themselves i.e. developer testing."
},
{
"title": "Testing types",
"header": "Unit testing - Stub",
"maincontent": "A proper unit test requires the unit to be tested in isolation so that bugs in the dependencies cannot influence the test i.e. bugs outside of the unit should not affect the unit tests.\n\nStubs can isolate the SUT from its dependencies."
},
{
"title": "Testing types",
"header": "Integration testing",
"maincontent": "Integration testing : testing whether different parts of the software work together (i.e. integrates) as expected. Integration tests aim to discover bugs in the 'glue code' related to how components interact with each other. These bugs are often the result of misunderstanding what the parts are supposed to do vs what the parts are actually doing.\n\nIntegration testing is not simply a case of repeating the unit test cases using the actual dependencies (instead of the stubs used in unit testing). Instead, integration tests are additional test cases that focus on the interactions between the parts.\n\nIn practice, developers often use a hybrid of unit+integration tests to minimize the need for stubs."
},
{
"title": "Testing types",
"header": "System testing",
"maincontent": "System testing is typically done by a testing team (also called a QA team).\n\nSystem test cases are based on the specified external behavior of the system. Sometimes, system tests go beyond the bounds defined in the specification. This is useful when testing that the system fails 'gracefully' when pushed beyond its limits.\n\nSystem testing includes testing against non-functional requirements too"
},
{
"title": "Testing types",
"header": "Alpha and beta testing",
"maincontent": "Alpha testing is performed by the users, under controlled conditions set by the software development team.\n\nBeta testing is performed by a selected subset of target users of the system in their natural work setting.\n\nAn open beta release is the release of not-yet-production-quality-but-almost-there software to the general population. For example, Google’s Gmail was in 'beta' for many years before the label was finally removed."
},
{
"title": "Testing types",
"header": "Exploratory testing",
"maincontent": "Exploratory testing is ‘the simultaneous learning, test design, and test execution’ [source: bach-et-explained] whereby the nature of the follow-up test case is decided based on the behavior of the previous test cases. In other words, running the system and trying out various operations. It is called exploratory testing because testing is driven by observations during testing. Exploratory testing usually starts with areas identified as error-prone, based on the tester’s past experience with similar systems. One tends to conduct more tests for those operations where more faults are found.\n\nExploratory testing is also known as reactive testing, error guessing technique, attack-based testing, and bug hunting."
},
{
"title": "Testing types",
"header": "Exploratory versus scripted testing",
"maincontent": "Which approach is better – scripted or exploratory? A mix is better.\n\nThe success of exploratory testing depends on the tester’s prior experience and intuition. Exploratory testing should be done by experienced testers, using a clear strategy/plan/framework. Ad-hoc exploratory testing by unskilled or inexperienced testers without a clear strategy is not recommended for real-world non-trivial systems. While exploratory testing may allow us to detect some problems in a relatively short time, it is not prudent to use exploratory testing as the sole means of testing a critical system.\n\nScripted testing is more systematic, and hence, likely to discover more bugs given sufficient time, while exploratory testing would aid in quick error discovery, especially if the tester has a lot of experience in testing similar systems."
},
{
"title": "Testing types",
"header": "Acceptance testing",
"maincontent": "Acceptance tests give an assurance to the customer that the system does what it is intended to do. Acceptance test cases are often defined at the beginning of the project, usually based on the use case specification. Successful completion of UAT is often a prerequisite to the project sign-off."
},
{
"title": "Test Automation",
"header": "Tools",
"maincontent": "JUnit is a tool for automated testing of Java programs. Similar tools are available for other languages and for automating different types of testing.\n\nMost modern IDEs have integrated support for testing tools. The figure below shows the JUnit output when running some JUnit tests using the Eclipse IDE."
},
{
"title": "Test Automation",
"header": "Automated Testing of GUIs",
"maincontent": "If a software product has a GUI (Graphical User Interface) component, all product-level testing (i.e. the types of testing mentioned above) need to be done using the GUI. However, testing the GUI is much harder than testing the CLI (Command Line Interface) or API, for the following reasons:\n\nMost GUIs can support a large number of different operations, many of which can be performed in any arbitrary order.\nGUI operations are more difficult to automate than API testing. Reliably automating GUI operations and automatically verifying whether the GUI behaves as expected is harder than calling an operation and comparing its return value with an expected value. Therefore, automated regression testing of GUIs is rather difficult.\nThe appearance of a GUI (and sometimes even behavior) can be different across platforms and even environments. For example, a GUI can behave differently based on whether it is minimized or maximized, in focus or out of focus, and in a high resolution display or a low resolution display.\n\nMoving as much logic as possible out of the GUI can make GUI testing easier. That way, you can bypass the GUI to test the rest of the system using automated API testing. While this still requires the GUI to be tested, the number of such test cases can be reduced as most of the system will have been tested using automated API testing."
},
{
"title": "Test coverage",
"header": "What",
"maincontent": "Test coverage is a metric used to measure the extent to which testing exercises the code i.e., how much of the code is 'covered' by the tests.\nHere are some examples of different coverage criteria:\n\nFunction/method coverage : based on functions executed e.g., testing executed 90 out of 100 functions.\nStatement coverage : based on the number of lines of code executed e.g., testing executed 23k out of 25k LOC.\nDecision/branch coverage : based on the decision points exercised e.g., an if statement evaluated to both true and false with separate test cases during testing is considered 'covered'.\nCondition coverage : based on the boolean sub-expressions, each evaluated to both true and false with different test cases. Condition coverage is not the same as the decision coverage.\nPath coverage measures coverage in terms of possible paths through a given part of the code executed. 100% path coverage means all possible paths have been executed. A commonly used notation for path analysis is called the Control Flow Graph (CFG).\nEntry/exit coverage measures coverage in terms of possible calls to and exits from the operations in the SUT."
},
{
"title": "Test coverage",
"header": "How",
"maincontent": "Measuring coverage is often done using coverage analysis tools. Most IDEs have inbuilt support for measuring test coverage, or at least have plugins that can measure test coverage.\n\nCoverage analysis can be useful in improving the quality of testing e.g., if a set of test cases does not achieve 100% branch coverage, more test cases can be added to cover missed branches."
},
{
"title": "Dependency injection",
"header": "What",
"maincontent": "Dependency injection is the process of 'injecting' objects to replace current dependencies with a different object. This is often used to inject stubs to isolate the SUT from its dependencies so that it can be tested in isolation.\n\n"
},
{
"title": "Test-Driven Development",
"header": "What",
"maincontent": "Test-Driven Development(TDD) advocates writing the tests before writing the SUT, while evolving functionality and tests in small increments. In TDD you first define the precise behavior of the SUT using test cases, and then write the SUT to match the specified behavior. While TDD has its fair share of detractors, there are many who consider it a good way to reduce defects. One big advantage of TDD is that it guarantees the code is testable.\n\nNote that TDD does not imply writing all the test cases first before writing functional code. Rather, proceed in small steps:\n\nDecide what behavior to implement.\nWrite test cases to test that behavior.\nRun those test cases and watch them fail.\nImplement the behavior.\nRun the test cases.\nKeep modifying the code and rerunning test cases until they all pass.\nRefactor code to improve quality.\nRepeat the cycle for each small unit of behavior that needs to be implemented."
},
{
"title": "Test case design",
"header": "What",
"maincontent": "Except for trivial SUTs, exhaustive testing is not practical because such testing often requires a massive/infinite number of test cases.\n\nEvery test case adds to the cost of testing. In some systems, a single test case can cost thousands of dollars e.g. on-field testing of flight-control software. Therefore, test cases need to be designed to make the best use of testing resources. In particular:\n\nTesting should be effective i.e., it finds a high percentage of existing bugs e.g., a set of test cases that finds 60 defects is more effective than a set that finds only 30 defects in the same system.\nTesting should be efficient i.e., it has a high rate of success (bugs found/test cases) a set of 20 test cases that finds 8 defects is more efficient than another set of 40 test cases that finds the same 8 defects.\n\nFor testing to be E&E, each new test you add should be targeting a potential fault that is not already targeted by existing test cases. There are test case design techniques that can help us improve the E&E of testing."
},
{
"title": "Test case design",
"header": "Positive vs Negative Test Cases",
"maincontent": "A positive test case is when the test is designed to produce an expected/valid behavior. On the other hand, a negative test case is designed to produce a behavior that indicates an invalid/unexpected situation, such as an error message."
},
{
"title": "Test case design",
"header": "Black Box vs Glass Box",
"maincontent": "Test case design can be of three types, based on how much of the SUT's internal details are considered when designing test cases:\n\nBlack-box (aka specification-based or responsibility-based) approach: test cases are designed exclusively based on the SUT’s specified external behavior.\nWhite-box (aka glass-box or structured or implementation-based) approach: test cases are designed based on what is known about the SUT’s implementation, i.e. the code.\nGray-box approach: test case design uses some important information about the implementation. For example, if the implementation of a sort operation uses different algorithms to sort lists shorter than 1000 items and lists longer than 1000 items, more meaningful test cases can then be added to verify the correctness of both algorithms."
},
{
"title": "Test case design",
"header": "Equivalence partitions",
"maincontent": "In general, most SUTs do not treat each input in a unique way. Instead, they process all possible inputs in a small number of distinct ways. That means a range of inputs is treated the same way inside the SUT. Equivalence partitioning (EP) is a test case design technique that uses the above observation to improve the E&E of testing. By dividing possible inputs into equivalence partitions you can,\n\navoid testing too many inputs from one partition. Testing too many inputs from the same partition is unlikely to find new bugs. This increases the efficiency of testing by reducing redundant test cases.\nensure all partitions are tested. Missing partitions can result in bugs going unnoticed. This increases the effectiveness of testing by increasing the chance of finding bugs."
},
{
"title": "Test case design",
"header": "Boundary value analysis",
"maincontent": "Boundary Value Analysis (BVA) is a test case design heuristic that is based on the observation that bugs often result from incorrect handling of boundaries of equivalence partitions. This is not surprising, as the end points of boundaries are often used in branching instructions, etc., where the programmer can make mistakes.\n\nBVA suggests that when picking test inputs from an equivalence partition, values near boundaries (i.e. boundary values) are more likely to find bugs.\n\nBoundary values are sometimes called corner cases.\n\nTypically, you should choose three values around the boundary to test: one value from the boundary, one value just below the boundary, and one value just above the boundary. The number of values to pick depends on other factors, such as the cost of each test case."
},
{
"title": "Test case design",
"header": "Combining test inputs",
"maincontent": "An SUT can take multiple inputs. You can select values for each input (using equivalence partitioning, boundary value analysis, or some other technique).\n\nTesting all possible combinations is effective but not efficient. If you test all possible combinations for the above example, you need to test 6x5x2x6=360 cases. Doing so has a higher chance of discovering bugs (i.e. effective) but the number of test cases will be too high (i.e. not efficient). Therefore, you need smarter ways to combine test inputs that are both effective and efficient."
},
{
"title": "Test case design",
"header": "Test Input Combination Strategies",
"maincontent": "Given below are some basic strategies for generating a set of test cases by combining multiple test inputs.\n\nEach Valid Input at Least Once in a Positive Test Case\nNo More Than One Invalid Input In A Test Case\nMix between the two"
},
{
"title": "Test case design",
"header": "Testing Based on Use Cases",
"maincontent": "Use cases can be used for system testing and acceptance testing. For example, the main success scenario can be one test case while each variation (due to extensions) can form another test case. However, note that use cases do not specify the exact data entered into the system. Instead, it might say something like user enters his personal data into the system. Therefore, the tester has to choose data by considering equivalence partitions and boundary values. The combinations of these could result in one use case producing many test cases.\n\nTo increase the E&E of testing, high-priority use cases are given more attention. For example, a scripted approach can be used to test high-priority test cases, while an exploratory approach is used to test other areas of concern that could emerge during testing."
},
{
"title": "Revision Control",
"header": "What",
"maincontent": "Revision control is the process of managing multiple versions of a piece of information. In its simplest form, this is something that many people do by hand: every time you modify a file, save it under a new name that contains a number, each one higher than the number of the preceding version.\n\nManually managing multiple versions of even a single file is an error-prone task, though, so software tools to help automate this process have long been available. The earliest automated revision control tools were intended to help a single user to manage revisions of a single file. Over the past few decades, the scope of revision control tools has expanded greatly; they now manage multiple files, and help multiple people to work together. The best modern revision control tools have no problem coping with thousands of people working together on projects that consist of hundreds of thousands of files.\n\nRevision control software will track the history and evolution of your project, so you don't have to. For every change, you'll have a log of who made it; why they made it; when they made it; and what the change was.\n\nRevision control software makes it easier for you to collaborate when you're working with other people. For example, when people more or less simultaneously make potentially incompatible changes, the software will help you to identify and resolve those conflicts.\n\nIt can help you to recover from mistakes. If you make a change that later turns out to be an error, you can revert to an earlier version of one or more files. In fact, a really good revision control tool will even help you to efficiently figure out exactly when a problem was introduced.\n\nIt will help you to work simultaneously on, and manage the drift between, multiple versions of your project. Most of these reasons are equally valid, at least in theory, whether you're working on a project by yourself, or with a hundred other people.\n-- [adapted from bryan-mercurial-guide]"
},
{
"title": "Revision Control",
"header": "Repositories",
"maincontent": "The repository is the database where the meta-data about the revision history are stored. Suppose you want to apply revision control on files in a directory called ProjectFoo. In that case, you need to set up a repo (short for repository) in the ProjectFoo directory, which is referred to as the working directory of the repo. For example, Git uses a hidden folder named .git inside the working directory.\n\nYou can have multiple repos in your computer, each repo revision-controlling files of a different working directory, for examples, files of different projects."
},
{
"title": "Revision Control",
"header": "Saving History",
"maincontent": "Tracking and ignoring\nIn a repo, you can specify which files to track and which files to ignore. Some files such as temporary log files created during the build/test process should not be revision-controlled.\n\nStaging and committing\nCommitting saves a snapshot of the current state of the tracked files in the revision control history. Such a snapshot is also called a commit (i.e. the noun).\n\nWhen ready to commit, you first stage the specific changes you want to commit. This intermediate step allows you to commit only some changes while saving other changes for a later commit."
},
{
"title": "Revision Control",
"header": "Revision ControlUsing History",
"maincontent": "RCS tools store the history of the working directory as a series of commits. This means you should commit after each change that you want the RCS to 'remember'.\n\nEach commit in a repo is a recorded point in the history of the project that is uniquely identified by an auto-generated hash e.g. a16043703f28e5b3dab95915f5c5e5bf4fdc5fc1.\n\nYou can tag a specific commit with a more easily identifiable name e.g. v1.0.2.\n\nTo see what changed between two points of the history, you can ask the RCS tool to diff the two commits in concern.\n\nTo restore the state of the working directory at a point in the past, you can checkout the commit in concern. i.e., you can traverse the history of the working directory simply by checking out the commits you are interested in."
},
{
"title": "Revision Control",
"header": "Remote Repositories",
"maincontent": "Remote repositories are repos that are hosted on remote computers and allow remote access. They are especially useful for sharing the revision history of a codebase among team members of a multi-person project. They can also serve as a remote backup of your codebase.\n\nIt is possible to set up your own remote repo on a server, but the easier option is to use a remote repo hosting service such as GitHub or BitBucket.\n\nYou can clone a repo to create a copy of that repo in another location on your computer. The copy will even have the revision history of the original repo i.e., identical to the original repo. For example, you can clone a remote repo onto your computer to create a local copy of the remote repo.\n\nWhen you clone from a repo, the original repo is commonly referred to as the upstream repo. A repo can have multiple upstream repos. For example, let's say a repo repo1 was cloned as repo2 which was then cloned as repo3. In this case, repo1 and repo2 are upstream repos of repo3.\n\nYou can pull from one repo to another, to receive new commits in the second repo, if the repos have a shared history. Let's say some new commits were added to the upstream repo after you cloned it and you would like to copy over those new commits to your own clone i.e., sync your clone with the upstream repo. In that case, you pull from the upstream repo to your clone.\n\nYou can push new commits in one repo to another repo which will copy the new commits onto the destination repo. Note that pushing to a repo requires you to have write-access to it. Furthermore, you can push between repos only if those repos have a shared history among them (i.e., one was created by copying the other at some point in the past).\n\nCloning, pushing, and pulling can be done between two local repos too, although it is more common for them to involve a remote repo.\n\nA repo can work with any number of other repositories as long as they have a shared history e.g., repo1 can pull from (or push to) repo2 and repo3 if they have a shared history between them.\n\nA fork is a remote copy of a remote repo. As you know, cloning creates a local copy of a repo. In contrast, forking creates a remote copy of a Git repo hosted on GitHub. This is particularly useful if you want to play around with a GitHub repo but you don't have write permissions to it; you can simply fork the repo and do whatever you want with the fork as you are the owner of the fork.\n\nA pull request (PR for short) is a mechanism for contributing code to a remote repo, i.e., \"I'm requesting you to pull my proposed changes to your repo\". For this to work, the two repos must have a shared history. The most common case is sending PRs from a fork to its upstream repo."
},
{
"title": "Revision Control",
"header": "Branching",
"maincontent": "Branching is the process of evolving multiple versions of the software in parallel. For example, one team member can create a new branch and add an experimental feature to it while the rest of the team keeps working on another branch. Branches can be given names e.g. master, release, dev.\n\nA branch can be merged into another branch. Merging usually results in a new commit that represents the changes done in the branch being merged.\n\nMerge conflicts happen when you try to merge two branches that had changed the same part of the code and the RCS cannot decide which changes to keep. In those cases, you have to ‘resolve’ the conflicts manually."
},
{
"title": "Revision Control",
"header": "DRCS vs CRCS",
"maincontent": "RCS can be done in two ways: the centralized way and the distributed way.\n\nCentralized RCS (CRCS for short) uses a central remote repo that is shared by the team. Team members download (‘pull’) and upload (‘push’) changes between their own local repositories and the central repository. Older RCS tools such as CVS and SVN support only this model. Note that these older RCS do not support the notion of a local repo either. Instead, they force users to do all the versioning with the remote repo.\n\nDistributed RCS (DRCS for short, also known as Decentralized RCS) allows multiple remote repos and pulling and pushing can be done among them in arbitrary ways. The workflow can vary differently from team to team. For example, every team member can have his/her own remote repository in addition to their own local repository, as shown in the diagram below. Git and Mercurial are some prominent RCS tools that support the distributed approach."
},
{
"title": "Revision Control",
"header": "Forking Flow",
"maincontent": "In the forking workflow, the 'official' version of the software is kept in a remote repo designated as the 'main repo'. All team members fork the main repo and create pull requests from their fork to the main repo.\n\nOne main benefit of this workflow is that it does not require most contributors to have write permissions to the main repository. Only those who are merging PRs need write permissions. The main drawback of this workflow is the extra overhead of sending everything through forks."
},
{
"title": "Revision Control",
"header": "Feature Branch Flow",
"maincontent": "Feature branch workflow is similar to forking workflow except there are no forks. Everyone is pushing/pulling from the same remote repo. The phrase feature branch is used because each new feature (or bug fix, or any other modification) is done in a separate branch and merged to the master branch when ready. Pull requests can still be created within the central repository, from the feature branch to the main branch.\n\nAs this workflow require all team members to have write access to the repository,\n\nit is better to protect the main branch using some mechanism, to reduce the risk of accidental undesirable changes to it.\nit is not suitable for situations where the code contributors are not 'trusted' enough to be given write permission."
},
{
"title": "Revision Control",
"header": "Centralized",
"maincontent": "The centralized workflow is similar to the feature branch workflow except all changes are done in the master branch."
},
{
"title": "Project planning ",
"header": "Work Breakdown Structure",
"maincontent": "A Work Breakdown Structure (WBS) depicts information about tasks and their details in terms of subtasks. When managing projects, it is useful to divide the total work into smaller, well-defined units. Relatively complex tasks can be further split into subtasks. In complex projects, a WBS can also include prerequisite tasks and effort estimates for each task.\n\nThe effort is traditionally measured in man hour/day/month i.e. work that can be done by one person in one hour/day/month. The Task ID is a label for easy reference to a task. Simple labeling is suitable for a small project, while a more informative labeling system can be adopted for bigger projects.\n\nAll tasks should be well-defined. In particular, it should be clear as to when the task will be considered done."
},
{
"title": "Project planning ",
"header": "Milestone",
"maincontent": "A milestone is the end of a stage which indicates significant progress. You should take into account dependencies and priorities when deciding on the features to be delivered at a certain milestone.\n\nIn some projects, it is not practical to have a very detailed plan for the whole project due to the uncertainty and unavailability of required information. In such cases, you can use a high-level plan for the whole project and a detailed plan for the next few milestones."
},
{
"title": "Project planning ",
"header": "Buffers",
"maincontent": "A buffer is time set aside to absorb any unforeseen delays. It is very important to include buffers in a software project schedule because effort/time estimations for software development are notoriously hard. However, do not inflate task estimates to create hidden buffers; have explicit buffers instead. Reason: With explicit buffers, it is easier to detect incorrect effort estimates which can serve as feedback to improve future effort estimates."
},
{
"title": "Project planning ",
"header": "Issue Trackers",
"maincontent": "Keeping track of project tasks (who is doing what, which tasks are ongoing, which tasks are done etc.) is an essential part of project management. In small projects, it may be possible to keep track of tasks using simple tools such as online spreadsheets or general-purpose/light-weight task tracking tools such as Trello. Bigger projects need more sophisticated task tracking tools.\n\nIssue trackers (sometimes called bug trackers) are commonly used to track task assignment and progress. Most online project management software such as GitHub, SourceForge, and BitBucket come with an integrated issue tracker."
},
{
"title": "Project planning ",
"header": "GANTT Chart",
"maincontent": "A Gantt chart is a 2-D bar-chart, drawn as time vs tasks (represented by horizontal bars).\n\nIn a Gantt chart, a solid bar represents the main task, which is generally composed of a number of subtasks, shown as grey bars. The diamond shape indicates an important deadline/deliverable/milestone."
},
{
"title": "Project planning ",
"header": "PERT Charts",
"maincontent": "A PERT (Program Evaluation Review Technique) chart uses a graphical technique to show the order/sequence of tasks. It is based on the simple idea of drawing a directed graph in which:\n\nNodes or vertices capture the effort estimations of tasks, and\nArrows depict the precedence between tasks\n\nA PERT chart can help determine the following important information:\n\nThe order of tasks. In the example above, Final Testing cannot begin until all coding of individual subsystems have been completed.\nWhich tasks can be done concurrently. In the example above, the various subsystem designs can start independently once the High level design is completed.\nThe shortest possible completion time. In the example above, there is a path (indicated by the shaded boxes) from start to end that determines the shortest possible completion time.\nThe Critical Path. In the example above, the shortest possible path is also the critical path.\n\nCritical path is the path in which any delay can directly affect the project duration. It is important to ensure tasks on the critical path are completed on time."
},
{
"title": "Teamwork",
"header": "Team Structures",
"maincontent": "Given below are three commonly used team structures in software development. Irrespective of the team structure, it is a good practice to assign roles and responsibilities to different team members so that someone is clearly in charge of each aspect of the project. In comparison, the ‘everybody is responsible for everything’ approach can result in more chaos and hence slower progress.\n\nEgoless team\nIn this structure, every team member is equal in terms of responsibility and accountability. When any decision is required, consensus must be reached. This team structure is also known as a democratic team structure. This team structure usually finds a good solution to a relatively hard problem as all team members contribute ideas.\n\nHowever, the democratic nature of the team structure bears a higher risk of falling apart due to the absence of an authority figure to manage the team and resolve conflicts.\n\nChief programmer team\nFrederick Brooks proposed that software engineers learn from the medical surgical team in an operating room. In such a team, there is always a chief surgeon, assisted by experts in other areas. Similarly, in a chief programmer team structure, there is a single authoritative figure, the chief programmer. Major decisions, e.g. system architecture, are made solely by him/her and obeyed by all other team members. The chief programmer directs and coordinates the effort of other team members. When necessary, the chief will be assisted by domain specialists e.g. business specialists, database experts, network technology experts, etc. This allows individual group members to concentrate solely on the areas in which they have sound knowledge and expertise.\n\nThe success of such a team structure relies heavily on the chief programmer. Not only must he/she be a superb technical hand, he/she also needs good managerial skills. Under a suitably qualified leader, such a team structure is known to produce successful work.\n\nStrict hierarchy team\nAt the opposite extreme of an egoless team, a strict hierarchy team has a strictly defined organization among the team members, reminiscent of the military or a bureaucratic government. Each team member only works on his/her assigned tasks and reports to a single “boss”.\n\nIn a large, resource-intensive, complex project, this could be a good team structure to reduce communication overhead."
},
{
"title": "SDLC process models",
"header": "What",
"maincontent": "Software development goes through different stages such as requirements, analysis, design, implementation and testing. These stages are collectively known as the software development life cycle (SDLC). There are several approaches, known as software development life cycle models (also called software process models), that describe different ways to go through the SDLC. Each process model prescribes a \"roadmap\" for the software developers to manage the development effort. The roadmap describes the aims of the development stage(s), the artifacts or outcome of each stage, as well as the workflow i.e. the relationship between stages."
},
{
"title": "SDLC process models",
"header": "Sequential Models",
"maincontent": "The sequential model, also called the waterfall model, models software development as a linear process, in which the project is seen as progressing steadily in one direction through the development stages. The name waterfall stems from how the model is drawn to look like a waterfall (see below).\n\nWhen one stage of the process is completed, it should produce some artifacts to be used in the next stage. For example, upon completion of the requirements stage, a comprehensive list of requirements is produced that will see no further modifications. A strict application of the sequential model would require each stage to be completed before starting the next.\n\nThis could be a useful model when the problem statement is well-understood and stable. In such cases, using the sequential model should result in a timely and systematic development effort, provided that all goes well. As each stage has a well-defined outcome, the progress of the project can be tracked with relative ease.\n\nThe major problem with this model is that the requirements of a real-world project are rarely well-understood at the beginning and keep changing over time. One reason for this is that users are generally not aware of how a software application can be used without prior experience in using a similar application."
},
{
"title": "SDLC process models",
"header": "Iterative Models",
"maincontent": "The iterative model (sometimes called iterative and incremental) advocates having several iterations of SDLC. Each of the iterations could potentially go through all the development stages, from requirements gathering to testing & deployment. Roughly, it appears to be similar to several cycles of the sequential model.\n\nIn this model, each of the iterations produces a new version of the product. Feedback on the new version can then be fed to the next iteration. Taking the Minesweeper game as an example, the iterative model will deliver a fully playable version from the early iterations. However, the first iteration will have primitive functionality, for example, a clumsy text based UI, fixed board size, limited randomization, etc. These functionalities will then be improved in later releases.\n\nThe iterative model can take a breadth-first or a depth-first approach to iteration planning.\n\nbreadth-first: an iteration evolves all major components in parallel e.g., add a new feature fully, or enhance an existing feature.\ndepth-first: an iteration focuses on fleshing out only some components e.g., update the backend to support a new feature that will be added in a future iteration.\n\nMost projects use a mixture of breadth-first and depth-first iterations i.e., an iteration can contain some breadth-first work as well as some depth-first work."
},
{
"title": "SDLC process models example",
"header": "Agile Models",
"maincontent": "In 2001, a group of prominent software engineering practitioners met and brainstormed for an alternative to documentation-driven, heavyweight software development processes that were used in most large projects at the time. This resulted in something called the agile manifesto (a vision statement of what they were looking to do).\n\nYou are uncovering better ways of developing software by doing it and helping others do it.\n\nThrough this work you have come to value:\n\nIndividuals and interactions over processes and tools\nWorking software over comprehensive documentation\nCustomer collaboration over contract negotiation\nResponding to change over following a plan\n\nThat is, while there is value in the items on the right, you value the items on the left more.\n\n-- Extract from the Agile Manifesto\nSubsequently, some of the signatories of the manifesto went on to create process models that try to follow it. These processes are collectively called agile processes. Some of the key features of agile approaches are:\n\nRequirements are prioritized based on the needs of the user, are clarified regularly (at times almost on a daily basis) with the entire project team, and are factored into the development schedule as appropriate.\nInstead of doing a very elaborate and detailed design and a project plan for the whole project, the team works based on a rough project plan and a high level design that evolves as the project goes on.\nThere is a strong emphasis on complete transparency and responsibility sharing among the team members. The team is responsible together for the delivery of the product. Team members are accountable, and regularly and openly share progress with each other and with the user.\n\nThere are a number of agile processes in the development world today. eXtreme Programming (XP) and Scrum are two of the well-known ones."
},
{
"title": "SDLC process models example",
"header": "XP",
"maincontent": "The following description was adapted from the XP home page, emphasis added:\n\nExtreme Programming (XP) stresses customer satisfaction. Instead of delivering everything you could possibly want on some date far in the future, this process delivers the software you need as you need it.\n\nXP aims to empower developers to confidently respond to changing customer requirements, even late in the life cycle.\n\nXP emphasizes teamwork. Managers, customers, and developers are all equal partners in a collaborative team. XP implements a simple, yet effective environment enabling teams to become highly productive. The team self-organizes around the problem to solve it as efficiently as possible.\n\nXP aims to improve a software project in five essential ways: communication, simplicity, feedback, respect, and courage. Extreme Programmers constantly communicate with their customers and fellow programmers. They keep their design simple and clean. They get feedback by testing their software starting on day one. Every small success deepens their respect for the unique contributions of each and every team member. With this foundation, Extreme Programmers are able to courageously respond to changing requirements and technology.\n\nXP has a set of simple rules. XP is a lot like a jig saw puzzle with many small pieces. Individually the pieces make no sense, but when combined together a complete picture can be seen. This flow chart shows how Extreme Programming's rules work together.\n\nPair programming, CRC cards, project velocity, and standup meetings are some interesting topics related to XP. Refer to extremeprogramming.org to find out more about XP."
},
{
"title": "SDLC process models example",
"header": "Scrum",
"maincontent": "This description of Scrum was adapted from Wikipedia [retrieved on 18/10/2011], emphasis added:\n\nScrum is a process skeleton that contains sets of practices and predefined roles. The main roles in Scrum are:\n\nThe Scrum Master, who maintains the processes (typically in lieu of a project manager)\nThe Product Owner, who represents the stakeholders and the business\nThe Team, a cross-functional group who do the actual analysis, design, implementation, testing, etc.\n\nA Scrum project is divided into iterations called Sprints. A sprint is the basic unit of development in Scrum. Sprints tend to last between one week and one month, and are a timeboxed (i.e. restricted to a specific duration) effort of a constant length.\n\nEach sprint is preceded by a planning meeting, where the tasks for the sprint are identified and an estimated commitment for the sprint goal is made, and followed by a review or retrospective meeting, where the progress is reviewed and lessons for the next sprint are identified.\n\nDuring each sprint, the team creates a potentially deliverable product increment (for example, working and tested software). The set of features that go into a sprint come from the product backlog, which is a prioritized set of high level requirements of work to be done. Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting, the Product Owner informs the team of the items in the product backlog that he or she wants completed. The team then determines how much of this they can commit to complete during the next sprint, and records this in the sprint backlog. During a sprint, no one is allowed to change the sprint backlog, which means that the requirements are frozen for that sprint. Development is timeboxed such that the sprint must end on time; if requirements are not completed for any reason they are left out and returned to the product backlog. After a sprint is completed, the team demonstrates the use of the software.\n\nScrum enables the creation of self-organizing teams by encouraging co-location of all team members, and verbal communication between all team members and disciplines in the project.\n\nA key principle of Scrum is its recognition that during a project the customers can change their minds about what they want and need (often called requirements churn), and that unpredicted challenges cannot be easily addressed in a traditional predictive or planned manner. As such, Scrum adopts an empirical approach—accepting that the problem cannot be fully understood or defined, focusing instead on maximizing the team’s ability to deliver quickly and respond to emerging requirements."
},
{
"title": "SDLC process models example",
"header": "Daily Scrum",
"maincontent": "Daily Scrum is another key scrum practice. The description below was adapted from https://www.mountaingoatsoftware.com (emphasis added):\n\nIn Scrum, on each day of a sprint, the team holds a daily scrum meeting called the daily scrum. Meetings are typically held in the same location and at the same time each day. Ideally, a daily scrum meeting is held in the morning, as it helps set the context for the coming day's work. These scrum meetings are strictly time-boxed to 15 minutes. This keeps the discussion brisk but relevant.\n\n...\n\nDuring the daily scrum, each team member answers the following three questions:\n\nWhat did you do yesterday?\nWhat will you do today?\nAre there any impediments in your way?\n\n...\n\nThe daily scrum meeting is not used as a problem-solving or issue resolution meeting. Issues that are raised are taken offline and usually dealt with by the relevant subgroup immediately after the meeting."
},
{
"title": "SDLC process models example",
"header": "Unified Process",
"maincontent": "The unified process is developed by the Three Amigos - Ivar Jacobson, Grady Booch and James Rumbaugh (the creators of UML).\n\nThe unified process consists of four phases: inception, elaboration, construction and transition. The main purpose of each phase can be summarized as follows:\n\nGiven above is a visualization of a project done using the Unified process (source: Wikipedia). As the diagram shows, a phase can consist of several iterations. Each vertical column (labeled “I1” “E1”, “E2”, “C1”, etc.) represents a single iteration. Each of the iterations consists of a set of ‘workflows’ such as ‘Business modeling’, ‘Requirements’, ‘Analysis & Design’, etc. The shaded region indicates the amount of resources and effort spent on a particular workflow in a particular iteration.\n\nUnified process is a flexible and customizable process model framework rather than a single fixed process. For example, the number of iterations in each phase, definition of workflows, and the intensity of a given workflow in a given iteration can be adjusted according to the nature of the project. Take the Construction Phase: to develop a simple system, one or two iterations would be sufficient. For a more complicated system, multiple iterations will be more helpful. Therefore, the diagram above simply records a particular application of the UP rather than prescribe how the UP is to be applied. However, this record can be refined and reused for similar future projects."
},
{
"title": "Principles",
"header": "SDLC process models example - CMMI",
"maincontent": "CMMI (Capability Maturity Model Integration) is a process improvement approach defined by Software Engineering Institute at Carnegie Melon University. CMMI provides organizations with the essential elements of effective processes, which will improve their performance. -- adapted from http://www.sei.cmu.edu/cmmi/\n\nCMMI defines five maturity levels for a process and provides criteria to determine if the process of an organization is at a certain maturity level. The diagram below [taken from Wikipedia] gives an overview of the five levels."
},
{
"title": "Principles",
"header": "Single Responsibility Principle",
"maincontent": "ingle responsibility principle (SRP): A class should have one, and only one, reason to change. -- Robert C. Martin\n\nIf a class has only one responsibility, it needs to change only when there is a change to that responsibility."
},
{
"title": "Principles",
"header": "The Open-Closed Principle",
"maincontent": "Open-closed principle (OCP): A module should be open for extension but closed for modification. That is, modules should be written so that they can be extended, without requiring them to be modified. -- proposed by Bertrand Meyer\n\nIn object-oriented programming, OCP can be achieved in various ways. This often requires separating the specification (i.e. interface) of a module from its implementation."
},
{
"title": "Principles",
"header": "Liskov substitution principle",