South African Archaeological Bulletin 69 (199): 00–00, 2014

Field and Technical Report

INTER-ANALYST VARIABILITY IN IDENTIFICATION AND CLASSIFICATION OF PRE-COLONIAL STONE WALLED STRUCTURES USING GOOGLE EARTH, SOUTHERN GAUTENG, SOUTH AFRICA

TAMSIN HUNT & KARIM SADR
School of Geography, Archaeology and Environmental Studies, University of the Witwatersrand, Johannesburg
E-mail:

(Received November 2013. Revised February 2014)

INTRODUCTION

In South Africa, aerial photographs and satellite imagery have been used to identify and classify pre-colonial stone-walled structures (e.g. Mason 1968; Seddon 1968; Maggs 1976; Taylor 1979; Sadr & Rodier 2012). The results have been used to draw far-ranging conclusions about the peopling of the landscape and changes in the social, political and economic organisation of its inhabitants. The different architectural styles of these stone-walled structures (SWS) are considered to be chronological markers as well as emblematic of the ethnic and linguistic identity of their occupants (e.g. Huffman 2007: 31–54). Given the importance of style in these studies, the question arises as to how reliably and consistently this attribute is classified by different researchers. The published descriptions and definitions of SWS styles (e.g. Mason 1968: 335–343; Taylor 1979: 10–11; Huffman 2007: 33–46) seem straightforward and are based on the plan view of the structures. In his pioneering study, Mason (1968: 172–173) tested for inter-analyst variability in the identification of stylistic types of SWS and showed that there was considerable variation. Maggs (1976: 28) pointed out that for a variety of reasons many SWS (what he called settlement units) are difficult to classify on aerial photographs. Consequently, he decided not to attempt to classify each individual SWS but rather to classify settlements, his term for a cluster of SWS within a few hundred metres of each other.
In this paper, we test in more detail the difficulties of classifying SWS by using new technologies such as Google Earth and Geographical Information Systems (GIS) software, so as to find out how much and what kind of effect inter-analyst variability might have on the interpretation of past settlement patterns and social organisation.

The problem of inter-analyst variability (or coder reliability, as it is sometimes called) has been studied in many disciplines (e.g. Kalton & Stowell 1979; Milne & Adler 1999; Oldenburg et al. 1999; Compton et al. 2012). In archaeology, recent examples concern analysis of cut marks on bone (Abe et al. 2002), skeletal measurements (Adams & Byrd 2002; Albarella 2002) and artefact classification (Beck & Jones 1989; Whittaker et al. 1998; Gnaden & Holdaway 2000). Lyman and VanPool (2009) provide a thorough review of the relevant literature on inter-analyst variability in archaeology and emphasise the need for more studies to ensure that analytic results are replicable.

For our study of inter-analyst variability we focused on a 400 km² survey parcel, Pam 2, in the southern part of Gauteng Province in South Africa. An average of about 200 ancient ruins had been classified in this area by three independent analysts using a modified version of the typology formulated by Taylor (1979: 10–11). His typology included three main types which he referred to as Group I, Group II and Group III. The principal identifying attribute of each of these three types mainly concerns the shape of the outer compound wall of the SWS and to some extent the organisation of internal enclosures. We have added a new type, Group IV, which refers to a type of SWS not encountered by Taylor in his study area. For training, each analyst was given the same document describing the four SWS types and their principal identifying attributes (essentially the type descriptions and illustrations in Sadr & Rodier 2012: 1036–1037).
They were shown how to use Google Earth (GE) software as well as the project's standard procedures for spotting, marking, digitising and classifying SWS on Google Earth.

The survey parcel Pam 2 (Fig. 1) is located just south of the Suikerbosrand Nature Reserve, between Johannesburg and the Vaal River. The region is characterised by open grassland, which makes the SWS generally easy to spot on satellite imagery. The efficacy of using Google Earth imagery for such a task had been shown in a previous study (MacQuilkan & Sadr 2010), and the 'Historical Imagery' tool in Google Earth is particularly useful here as it allows the analyst to view the same site in different seasons and at different times of the day. For future studies, it will also be interesting to use the earliest available aerial photographs from the 1930s and 1940s, which in many cases show less vegetation cover and better preserved stone walls (Tim Maggs, pers. comm. 2014).

For the examination of inter-analyst variability, all SWS identified and classified by the three researchers were imported into QGIS 1.8.0 software. Each file was converted into a shapefile and projected to UTM 35S. Each analyst's output was measured with descriptive statistics such as the mean, median, range and standard deviation of numbers of sites identified in each type of SWS, their size, altitude above sea level and distance to nearest neighbours. The distributions of their sites were subjected to various spatial analyses as described below. Most of the tools to carry out these measurements and analyses are available within QGIS software, and include the basic statistics tool as well as the point sampling plug-ins. For further spatial analyses such as measuring centroids, standard deviation ellipses of site distributions and hotspots, each analyst's shapefiles were imported into the spatial statistics program CrimeStat III (Levine 2010).
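As an illustration of the kind of per-group summary produced at this stage, the sketch below computes the same descriptive statistics with Python's standard library. The site areas and the function name are invented for the example; the real values came from digitised polygons in QGIS, not from this illustration.

```python
from statistics import mean, median, stdev

# Hypothetical SWS areas (sq. m) for one analyst's Group I sites.
# These numbers are illustrative only, not data from the survey.
group_i_areas = [2100, 3400, 2900, 4100, 3600, 3950, 2500]

def describe(values):
    """Basic descriptive statistics of the kind reported per SWS group:
    count, mean, median, range and standard deviation."""
    return {
        "n": len(values),
        "mean": mean(values),
        "median": median(values),
        "range": max(values) - min(values),
        "stdev": stdev(values),
    }

print(describe(group_i_areas))
```

The same function would be applied to each analyst's areas, altitudes and nearest-neighbour distances, group by group.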
For site-by-site comparison, a database was created in Microsoft Excel to record each SWS against its size and type as recorded by each of the three analysts. Finally, a visual comparison was made on Google Earth to see how each analyst defined the outline and classified each of their SWS.

The results show high inter-analyst variability in the identification and classification of SWS in Pam 2. After describing the analyses in more detail below, we discuss several factors that may have influenced the degree of inter-analyst variability, such as the level of the analyst's competence, enthusiasm, dedication, perseverance, training and experience, as well as the state of preservation of the ruins, the vegetation cover, and the resolution of the Google Earth imagery available for Pam 2 at the time of the study. We conclude that inter-analyst variability is inevitable when using aerial or satellite imagery, but that a possible solution lies in approaching the problem at a different scale. This point is described in more detail in the discussion section below.

INTER-ANALYST VARIABILITY

The three analysts identified different numbers of SWS in Pam 2 (Table 1). The range was from 160 to 341 sites; clearly a significant difference. The range in the number of SWS classified as Group I was 41–81 (Table 1). There was equally high disagreement on the total number of Group II sites, with a range between the three analysts from 13 to 31. Groups III and IV showed even higher variation between analysts, with 8.1- and 7.3-fold differences between the highest and lowest numbers respectively. Each analyst included a type named 'Unknown' in which they placed those structures that they could not classify as any of the four groups. Not surprisingly, there is much variation in these figures as well.

How many SWS were identically classified by all three analysts? Table 2 shows that 119 SWS were classified as Group I by at least one analyst.
But only 19 of these were classified as Group I by all three, showing only a 16% three-way agreement. In a two-way comparison the results are a bit better: analysts T and K agreed on 19% of the sites, while K and M agreed on 27%, and T and M agreed on 34%. These are low levels of agreement indeed: in the general literature on this subject, a more or less 80% level of inter-analyst agreement is recommended if the results are to be considered reproducible.

FIG. 1. Map of the study area showing contour lines at 20 m intervals, the survey parcels mentioned in the text, and the provincial boundary between Gauteng (GP), Mpumalanga (MP) and the Free State (FS).

TABLE 1. The number of structures, average areas of SWS, and nearest neighbour indices (NNI) of each group of SWS as identified by each analyst.

                      Analyst T                        Analyst M                        Analyst K
            No. SWS  % SWS  Area sq. m  NNI   No. SWS  % SWS  Area sq. m  NNI   No. SWS  % SWS  Area sq. m  NNI
All SWS       230    100                        160    100                        341    100
Group I        75     32.6     3345    0.53      81     50.6     3420    0.51      41     12.0     2537    0.64
Group II       13      5.7     4845    0.54      31     19.4     4796    0.39      27      7.9     7734    0.27
Group III      65     28.3     6918    0.32       8      5.0     9417    1.04      29      8.5     6107    0.54
Group IV       73     31.7      181    0.12      26     16.3      176    0.47     196     57.5      989    0.16
Unknown         4      1.7                       14      8.8                       48     14.1

The highest levels of agreement were achieved with Group II SWS. The three-way agreement of 26% in the classification of Group II SWS is low, but analysts K and M managed to reach a 60% level of agreement on these SWS. Group II SWS are characterised by a diagnostic, deeply scalloped outer wall. This may explain the relatively higher rate of agreement for this type. Nonetheless, the rate of agreement is still well below the desired target. The levels of three-way agreement for Group III sites are much lower than for Group II. Group III sites are the least well-defined type, which may explain the extremely low levels of agreement in identifying this type.
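The two- and three-way agreement rates discussed here can be computed mechanically once each analyst's classifications are keyed to a shared site identifier. The sketch below assumes such a keying; the site IDs and group assignments are invented for illustration and do not reproduce the survey data.

```python
from itertools import combinations

# Hypothetical classifications keyed by a shared site ID. The labels follow
# the paper's Group I-IV / Unknown typology, but the assignments are made up.
classified = {
    "T": {1: "I", 2: "I", 3: "II", 4: "IV", 5: "I"},
    "M": {1: "I", 2: "II", 3: "II", 4: "IV", 5: "Unknown"},
    "K": {1: "I", 2: "I", 3: "III", 4: "IV", 5: "I"},
}

def agreement(group):
    """For one group: sites so classified by at least one analyst, sites so
    classified by all three, and pairwise counts of shared classifications."""
    at_least_one = {s for byid in classified.values()
                    for s, g in byid.items() if g == group}
    all_three = {s for s in at_least_one
                 if all(byid.get(s) == group for byid in classified.values())}
    pairwise = {
        (a, b): sum(1 for s in at_least_one
                    if classified[a].get(s) == group == classified[b].get(s))
        for a, b in combinations(classified, 2)
    }
    return len(at_least_one), len(all_three), pairwise

total, unanimous, pairs = agreement("I")
print(total, unanimous, pairs)
```

Dividing the unanimous and pairwise counts by the "at least one" total gives percentages directly comparable to those in Table 2.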
On the other hand, Group IV SWS, which are clearly defined as simple stone circles with no internal enclosures, do not score much higher on inter-analyst agreement. In total, of the 382 individual SWS identified by at least one analyst, only 46 were classified the same way by all three analysts: a meagre 12% three-way total agreement.

An examination of the SWS outlines drawn by each analyst helps clarify why structures were classified differently. Often all three analysts would classify an SWS as the same type but digitise its outline differently. The opposite was also evident, and SWS were sometimes classified differently while identical outlines were digitised by the three analysts. But the most frequent problem was of three analysts digitising different outlines for the same SWS and classifying them differently. Figure 2a shows one such set of structures as seen on Google Earth: M outlined this set of structures as one SWS and classified it as Group II (Fig. 2b); T outlined eight separate Group IV SWS (Fig. 2c); K also identified eight separate Group IV structures, but digitised their outlines differently from T (Fig. 2d). There is no consistency in one or the other analyst regularly splitting or lumping more than the others.

How much of an effect do these variations in outlining and classifying SWS have on the interpretation of settlement patterns? Sadr and Rodier (2012) interpreted the SWS distribution in a neighbouring survey parcel (Pam 1, Fig. 1) to show that in the earliest phase Group I SWS were generally fairly small and dispersed throughout the survey area. These SWS showed little spatial correlation with the distribution of the best arable soils. Group II SWS were generally larger and indeed some were huge. These SWS tended to be much more densely agglomerated and clustered within five kilometres of the best arable lands.
Group III SWS were interpreted as a transitional phase between Group I and II SWS, showing intermediate statistics in terms of aggregation/dispersal, clustering near the best arable lands, and site size. Group IV SWS were thought to be the most recent and were very small and highly clustered in an area far from the best arable lands. Sadr and Rodier interpreted this sequence as showing a gradual change from a more pastoralist, egalitarian society in the earliest phase (Group I), to a highly stratified farming society in the third phase (Group II). Group III SWS were interpreted as representing the transitional phase between these extremes of the sequence. Group IV SWS were considered to be contemporary with or later than the Group II farming towns, and represented a different culture and economy in another part of the survey area not oriented towards intensive farming.

FIG. 2. An example of SWS digitised and classified differently by the three analysts. White scale bar represents 40 m.

TABLE 2. Agreement Index: the number of SWS in each group that were so classified by more than one analyst.

                 Group I   I %    Group II   II %   Group III   III %   Group IV   IV %   Unknown
Total*             119              38                 74                 246               61
Agreed by all       19    16.0      10      26.3        3       4.1        14      5.7       0
by T and K          23    19.3      10      26.3       18      24.3        30     12.2       0
by T and M          40    33.6      13      34.2        5       6.8        15      6.1       0
by K and M          32    26.9      23      60.5        5       6.8        25     10.2       3

*This is the number of SWS in each Group that were so classified by at least one analyst. It does not represent the total of the succeeding four rows.

A question to ask is: 'to what extent do the site size distribution statistics, nearest neighbour indices, aggregation vs dispersion, and other relevant patterns in survey parcel Pam 2 correspond to the patterns described in Pam 1?' More to the point in this paper, 'do the three analysts' classifications of SWS in Pam 2 produce similar settlement patterns, allowing similar interpretations of past occupation of this landscape?' The short answer is yes and no, but it is the details that tell us something useful about the classification system and the problems of inter-analyst variability. Below, we tackle the issues one by one, starting with the simpler ones.

Despite the very low levels of inter-analyst agreement in SWS classification, in terms of SWS size in Pam 2 the patterns are in fairly good agreement with the results from Pam 1 as regards Group I and IV SWS, but less so for Groups II and III (Sadr & Rodier 2012: 1039). All three analysts working on Pam 2 measured the smallest mean SWS areas for Group IV followed by Group I (Table 1); a result which mirrors the patterns in Pam 1. K produced a mean area for Group II SWS that is significantly larger than the rest, but both T and M classified the largest SWS as Group III (Table 1). In Pam 1, the Group II SWS had the largest average area, followed by Group III SWS.
In general, all three analysts' average area measurements in Pam 2 suggest an increase in average SWS size through time, with the largest sites in the middle of the sequence (Groups II and III) and a significant change to small sites in the last phase (Group IV). This is the same as the general pattern identified in Pam 1.

Another useful way to compare the settlement patterns produced by the three analysts in Pam 2 is by looking at the mean centre (centroid) of the distribution of SWS in each group. The mean centre identifies a point that is the average of a distribution of points in space, and by comparing the distances between the centroids for each analyst's group of SWS we are able to measure the difference in the spatial distribution of each analyst's group of SWS. The centroids of Group I SWS for T and K are only 1.5 km apart, while for T and M they are 2.7 km apart. Analysts K and M, however, have very different distributions of Group I SWS and their centroids are 4 km apart. Group II SWS distributions show the least variation between analysts, with the distance between the mean centre points at a minimum. Once again, this is probably due to the fact that Group II SWS are well defined and quite diagnostic. In contrast, probably because Group III SWS are ill-defined, their spatial distribution shows very little inter-analyst agreement, with the distances between centroids varying between 3.8 km and 8.7 km.

As a further comparison, we can project the one standard deviation ellipse (1SDE) around these centroids. The 1SDE is exactly what it says: an ellipse which envelops about two thirds of the SWS nearest to the centroid. For this calculation we again used the software CrimeStat III, and the results are shown in Fig. 3.

FIG. 3. The one standard deviation ellipses (1SDE) of the four groups of SWS as identified by the three analysts. Dotted line by analyst K; solid line by analyst T; triangles by analyst M.

The 1SDE of Group I SWS for the three analysts are remarkably similar, suggesting that although there are many discrepancies in how each identified and classified individual Group I SWS, nevertheless they basically observed the same general distribution of these sites in space. There is likewise strong agreement in the 1SDE for Group II SWS, with all three analysts identifying these sites as clustered in a small area in the south of the study area. For the Group IV distributions, the 1SDE of two analysts are close to each other but the third does not agree, while none of the analysts are in agreement regarding the spatial distribution of Group III SWS. As a general conclusion, one might say that there is generally less inter-analyst variability in plotting the distribution of Group I and II SWS, regardless of the low levels of agreement at the classification stage (see Table 2 above). But why do Group III and IV SWS show more variability in the 1SDE of their spatial distribution among the three analysts? We assume the types are not well-defined.

To explore this last point we look at the degree of clustering versus dispersal of SWS within these spatial distributions. The nearest neighbour index (the ratio of the observed mean distance between neighbours in the actual distribution of SWS, divided by the average distance between neighbours in a hypothetical random distribution) is one way to examine this issue. The observed mean distance to the nearest neighbour was obtained by measuring the distance between the centre points of SWS using the nearest neighbour analysis tool in QGIS. If the nearest neighbour index is less than 1 the pattern exhibits more clustering than random, while a number greater than 1 suggests more dispersion. This index indicated in Pam 1 a chronological trend towards more SWS aggregation through time (Sadr & Rodier 2012: 1038).
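Both measures used in these comparisons, the mean centre and the nearest neighbour index, are simple to compute directly from site centre points. The sketch below implements them with Python's standard library, assuming the standard Clark & Evans expectation of 0.5 * sqrt(area / n) for the mean nearest-neighbour distance in a random distribution; the coordinates and the 1 km² study area are invented for the example.

```python
from math import dist, sqrt

# Hypothetical SWS centre points (UTM 35S metres) for one group as digitised
# by two analysts. Real coordinates came from the analysts' shapefiles.
analyst_t = [(0, 0), (120, 40), (90, 300), (500, 450), (520, 480), (300, 200)]
analyst_k = [(30, 10), (150, 60), (80, 280), (510, 470), (540, 500), (310, 230)]
STUDY_AREA = 1000 * 1000  # assumed 1 km x 1 km parcel, in sq. m

def mean_centre(pts):
    """Centroid of a point distribution: the average of the coordinates."""
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def nearest_neighbour_index(pts, area):
    """Observed mean nearest-neighbour distance divided by the expected
    value 0.5 * sqrt(area / n) for a random (Poisson) distribution.
    Values below 1 indicate clustering; above 1, dispersion."""
    observed = sum(
        min(dist(p, q) for j, q in enumerate(pts) if j != i)
        for i, p in enumerate(pts)
    ) / len(pts)
    return observed / (0.5 * sqrt(area / len(pts)))

# Distance between two analysts' centroids for the same group, and the NNI:
print(dist(mean_centre(analyst_t), mean_centre(analyst_k)))
print(nearest_neighbour_index(analyst_t, STUDY_AREA))
```

Comparing centroid distances and NNI values across analysts, as done here in QGIS and CrimeStat III, reduces to calls of these two functions on each analyst's point set.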
In Pam 2, the nearest neighbour indices obtained for analysts T and K indicate the highest level of clustering (smallest number on the nearest neighbour index) for Group IV SWS and the most dispersed distribution (highest number on the nearest neighbour index) for Group I SWS. This is the same pattern as observed in Pam 1. T and K agree that Group II and III SWS are between these extremes of clustering and dispersal, but they disagree on the relative clustering of Group II versus Group III SWS. Analyst M is in disagreement with T and K on practically all aspects of the nearest neighbour index. Overall, the results in Table 1 indicate that the different analysts' classifications of SWS in Pam 2 produce different patterns of settlement clustering and dispersal. But two of the analysts could draw fairly similar conclusions regarding the change through time in patterns of SWS aggregation and dispersal, and both would be in fairly close agreement with the patterns recorded in Pam 1.

A better comparison of the three analysts' views of settlement patterns may be obtained through an analysis of hotspots. A hotspot in this case is the place on a map with high numbers of SWS classified into the same Group. For this measurement we once again used CrimeStat III and set a 500-metre radius around the centre point of each SWS as the threshold for analysis. In Figs 4–7, each circle is centred on an SWS and the size of the circle represents how many other SWS fall within its 500 m radius; so the larger the circle, the more SWS occur within 500 m of its centre point. Visually, the larger the circle, the hotter is that spot in terms of clustered SWS. This device provides a quick visual clue as to whether the different analysts are identifying similar hotspots for each type even though they might be disagreeing on the details of SWS

FIG. 4. The hotspots for Group I SWS.
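The hotspot measure itself, counting how many other SWS fall within 500 m of each site's centre point, can be sketched without CrimeStat. The coordinates below are invented for illustration; in the study, the counts were produced in CrimeStat III from the analysts' shapefiles.

```python
from math import dist

# Hypothetical SWS centre points in metres: three sites close together
# and a separate pair, to show one "hotter" and one "cooler" cluster.
centres = [(0, 0), (100, 100), (200, 50), (2000, 2000), (2100, 1900)]
RADIUS = 500.0  # the 500 m threshold used for Figs 4-7

def hotspot_counts(pts, radius):
    """For each SWS, count how many *other* SWS fall within `radius` of its
    centre point; larger counts correspond to larger circles (hotter spots)."""
    return [sum(1 for j, q in enumerate(pts) if j != i and dist(p, q) <= radius)
            for i, p in enumerate(pts)]

print(hotspot_counts(centres, RADIUS))
```

Each count would then be drawn as a circle scaled to its value, reproducing the visual device of Figs 4–7.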