Juvenile Offending (records)

Brief Description: This is a measure of juvenile offending based on information gathered from juvenile justice agencies on arrests, offenses, and sentencing.

Rationale: Evaluators frequently rely on youth self-reports of delinquent behavior and arrests. However, some youth under-report delinquent behavior or misreport the nature of their offense (Thornberry & Krohn, 2000), perhaps because of the consequences that could follow disclosure or a failure to recall these behaviors. In addition, some groups of youth tend to over-report their arrests, whereas others (i.e., those with more arrests) tend to under-report them (Kirk, 2006; Krohn, Lizotte, Phillips, Thornberry, & Bell, 2013). For these reasons, collecting records of offending is desirable when feasible.

Cautions: Be sure to set aside adequate time and staff resources to collect records on juvenile offending. Accessing juvenile justice records often requires extensive planning and negotiation with the state (e.g., Office of Juvenile Justice, Department of Corrections) and/or the reporting agency or agencies (e.g., juvenile court, family court), particularly around issues of data privacy and confidentiality. The juvenile justice system is complex in structure and process, with jurisdictional variations and ongoing reforms and other changes over time. See the flow chart here for an overview. It is also important to note that an arrest should not be interpreted as an indication that a youth necessarily committed the offense with which they were charged, and that the behavior of certain groups is more likely to result in an arrest, charge, and conviction than the behavior of individuals from other groups (Development Services Group, 2014; Huizinga et al., 2007; Kakade et al., 2012).

Access and permissions: Access to juvenile offending records typically involves strict confidentiality conditions. Organizations seeking access to these records typically will be required to complete a formal written application that details the specific information being requested. If the request is approved, expect that written parent permission, and potentially also youth consent or assent (depending on the age of the youth), will be required prior to release of any data that identify individual youth (i.e., data that are not deidentified). Consideration also should be given to collecting the youth’s Social Security number, with parent permission, because agencies granting access to juvenile offending records often require it to identify youth within available records. Also consider budgeting funds to reimburse agency staff for the time needed to gather these data.

What to Collect: A formatted data collection guide of the types of data that programs might consider requesting directly from an agency can be found here. 

How to Collect:

Sources: The agency providing juvenile justice records may vary across jurisdictions, or these records may reside in multiple agencies within the same jurisdiction. Programs that serve youth in more than one geographical area should anticipate potentially needing to work with a range of differing types of agencies (e.g., office of juvenile justice, juvenile court, Department of Corrections) or with agencies that have different names in different jurisdictions but serve the same function (e.g., juvenile court, family court), and each of these agencies may have slightly different requirements for access. Establishing contact with administrators at the agencies from which juvenile justice records will be collected prior to an evaluation is critical. This will clarify the documentation needed (e.g., it may be necessary to develop permission/consent forms that include very specific information) and potential barriers to collecting the data. Depending on the nature of the evaluation and the jurisdiction’s policies, the agency may agree to provide only “deidentified” data (i.e., data that do not include youth names or other identifying information). If so, it is advisable to ask in the data request that information such as basic demographics (gender or race/ethnicity) or program participation status be attached to each line of data before youth names are removed, so that this information can be used in analyses once the data are obtained. Care must be taken, however, to ensure this type of attached information does not allow a youth to be inadvertently identified; a general rule of thumb is to ensure that the data, once obtained, do not include subgroups (e.g., male Native American youth) of fewer than 10 youth.
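
As a rough illustration of the small-subgroup rule of thumb just described, the sketch below (a minimal example in Python using pandas; the column names gender, race_ethnicity, and program_status and the file name are hypothetical, not part of this Toolkit) flags any demographic subgroup containing fewer than 10 youth before a deidentified dataset is analyzed or shared.

```python
# Minimal sketch (hypothetical column and file names) for flagging demographic
# subgroups smaller than a chosen threshold in a deidentified extract.
import pandas as pd

def flag_small_subgroups(df: pd.DataFrame, group_cols, min_n: int = 10) -> pd.DataFrame:
    """Return combinations of the grouping columns with fewer than `min_n` youth."""
    counts = df.groupby(list(group_cols)).size().reset_index(name="n_youth")
    return counts[counts["n_youth"] < min_n]

# Example usage:
# records = pd.read_csv("deidentified_extract.csv")
# too_small = flag_small_subgroups(records, ["gender", "race_ethnicity", "program_status"])
# if not too_small.empty:
#     print("Subgroups below the 10-youth threshold:")
#     print(too_small)
```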

Additional Considerations: If there is interest in assessing change over time in outcomes, the time period for which data are requested should be specified accordingly, along with a request for information (e.g., dates of arrest) that will allow for the desired type of analysis (e.g., before and after program participation). It is also advisable to consider requesting data not only for the entire period of a youth’s program participation, but also for a period of time after participation has ended, as this information can be helpful for evaluating possible longer-term effects of program involvement. Care also should be taken to account for possible variation in the time period for which data should be requested for different youth, such as when youth enroll in a program at different points in time. If possible, consideration also should be given to collecting similar records for a comparable group of youth not participating in the mentoring program. These data can be used to compare outcomes for program and non-program participants, which is a more robust evaluation design than simply looking at changes over the course of program involvement for program participants (for further discussion of evaluation design considerations, see the Evaluation Guidance and Resources section of this Toolkit).
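
To illustrate how arrest dates might be aligned with each youth’s own period of program participation, the following sketch (assuming an arrest-level table with youth_id and arrest_date columns and a roster with enrollment_date and exit_date columns; all names and files are hypothetical) tags each arrest as occurring before, during, or after participation.

```python
# Minimal sketch (hypothetical column names) for classifying each arrest
# relative to a youth's own enrollment and exit dates.
import pandas as pd

def tag_arrest_period(arrests: pd.DataFrame, roster: pd.DataFrame) -> pd.DataFrame:
    """Label each arrest as 'before', 'during', or 'after' program participation."""
    merged = arrests.merge(roster, on="youth_id", how="left")

    def period(row):
        if row["arrest_date"] < row["enrollment_date"]:
            return "before"
        if row["arrest_date"] <= row["exit_date"]:
            return "during"
        return "after"

    merged["period"] = merged.apply(period, axis=1)
    return merged

# Example usage:
# arrests = pd.read_csv("arrests.csv", parse_dates=["arrest_date"])
# roster = pd.read_csv("roster.csv", parse_dates=["enrollment_date", "exit_date"])
# tagged = tag_arrest_period(arrests, roster)
```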

How to Analyze:

Format: It is important to work closely with reporting agencies to interpret differences in juvenile justice terminology and offense classifications across agencies. It may also be necessary to review key terms (e.g., with the help of an interpretation guide, if available) prior to collecting or “coding” this information (i.e., translating values or categories into numbers that can be analyzed). A list of some of these terms can be found here. Additionally, records data on arrests, juvenile offenses, and sentencing may be provided in many different formats across agencies. For example, some agencies may provide an Excel table with columns containing each piece of information requested, whereas others may provide photocopied case files with the information in narrative form across multiple documents, in which case it will be necessary to read through these descriptions to pull out the information needed.
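
As a simple illustration of “coding” categorical record values into numbers for analysis, the sketch below maps offense-type labels onto numeric codes. The categories, codes, and file name are hypothetical; the actual scheme should reflect the reporting agency’s own terminology and any available interpretation guide.

```python
# Minimal sketch (hypothetical labels, codes, and file name) for translating
# offense-type categories found in agency records into numeric codes.
import pandas as pd

OFFENSE_CODES = {
    "person": 1,    # e.g., robbery, assault
    "property": 2,  # e.g., vandalism, burglary
    "drug": 3,
    "status": 4,    # e.g., truancy, curfew violation
    "other": 5,
}

def code_offense_type(records: pd.DataFrame) -> pd.DataFrame:
    """Add a numeric offense_code column based on the offense_type label."""
    records = records.copy()
    records["offense_code"] = records["offense_type"].str.lower().map(OFFENSE_CODES)
    return records

# Example usage with a spreadsheet-style extract:
# records = pd.read_excel("agency_extract.xlsx")
# coded = code_offense_type(records)
```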

Scoring: How the collected information is coded or “scored” will depend on your specific aims. For example, a program might be interested in a broad count of arrests. In this case, it may be sufficient to simply add up the total number of arrests reported. Similarly, if there is interest only in whether the youth was arrested, a simple “yes/no” indicator can be used, in which a score of 1 reflects at least one arrest and a score of 0 reflects no arrests. However, not all arrests involve offenses of equal severity. Distinctions that may be useful to consider include whether an offense involves violence and whether an offense is person-related (e.g., robbery) or property-related (e.g., vandalism). As detailed in the data collection guide (here), for example, the Uniform Crime Reporting Program of the Federal Bureau of Investigation distinguishes between a set of "index" crimes (e.g., aggravated assault, burglary) and other types of offenses (e.g., disorderly conduct), with the index crimes further subdivided into violent and property crimes. Considering such distinctions (e.g., total number of violence-related arrests or an indication of whether a youth was arrested for any person-related offense) may provide a more nuanced understanding of a program’s effects or the needs of the youth it serves.
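
As a sketch of how the scoring options described above might be computed from an arrest-level table, the code below (hypothetical column names: youth_id, arrest_date, and a true/false violent_offense flag) produces a per-youth total arrest count, a count of violence-related arrests, and a yes/no arrest indicator. A roster listing all youth is merged in so that youth with no arrests receive scores of zero.

```python
# Minimal scoring sketch (hypothetical column names): one row per arrest in,
# one row per youth out.
import pandas as pd

def score_arrests(arrests: pd.DataFrame, roster: pd.DataFrame) -> pd.DataFrame:
    """Aggregate arrest-level records into per-youth scores."""
    per_youth = (
        arrests.groupby("youth_id")
        .agg(
            total_arrests=("arrest_date", "count"),
            violent_arrests=("violent_offense", "sum"),
        )
        .reset_index()
    )
    # Merge against the full roster so youth with no arrests are kept with 0s.
    scores = roster[["youth_id"]].merge(per_youth, on="youth_id", how="left").fillna(0)
    # 1 = at least one arrest in the records provided, 0 = none
    scores["any_arrest"] = (scores["total_arrests"] > 0).astype(int)
    return scores

# Example usage:
# arrests = pd.read_csv("arrests.csv", parse_dates=["arrest_date"])
# roster = pd.read_csv("roster.csv")
# youth_scores = score_arrests(arrests, roster)
```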

How to interpret findings: When using frequency counts or “presence or absence” of arrests or offenses in evaluating a program’s possible effects, results for a group of youth (e.g., those participating in a program) can be expressed as the average number of arrests per youth or as the percentage of youth with one or more arrests in that group within a given time period. Programs interested in gauging their effectiveness will generally be looking for declines in these numbers during or after youth’s participation in the program. However, such change could occur for reasons unrelated to program participation (e.g., initiation of local reforms such as pre-arrest diversion programs); conversely, an absence of change, or even an increase in arrests, might be observed due to non-program factors such as a developmental trend toward greater involvement in delinquent behavior with age. In the absence of an appropriate comparison group, findings should never be interpreted as being indicative of program effectiveness or lack thereof (for further discussion, see the Evaluation Guidance and Resources section of this Toolkit).
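
Continuing the hypothetical per-youth scores sketched above, the group-level summaries described here (average number of arrests per youth and percentage of youth with one or more arrests) could be computed as follows; the program_status column used to split groups is an assumed addition to that table, not something defined in this Toolkit.

```python
# Minimal sketch building on the hypothetical `youth_scores` table above.
def summarize_group(scores) -> dict:
    """Group-level summaries for one set of youth within a given time period."""
    return {
        "n_youth": len(scores),
        "mean_arrests_per_youth": scores["total_arrests"].mean(),
        "pct_with_any_arrest": 100 * scores["any_arrest"].mean(),
    }

# Example usage, assuming a program_status column has been merged into youth_scores:
# print(summarize_group(youth_scores[youth_scores["program_status"] == "participant"]))
# print(summarize_group(youth_scores[youth_scores["program_status"] == "comparison"]))
```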

Alternatives: Although youth self-reports of arrests may not be as accurate or precise as data collected from official records, youth often recall arrest information with good accuracy, particularly when arrests are few in number (Thornberry & Krohn, 2000). In addition, although some youth may be reluctant to report delinquent behavior, official arrest records are also limited in that they do not capture delinquent acts for which youth are not caught and arrested. For this reason, youth self-reports of arrests and/or delinquent activity could be a good alternative. 


Citations:

Development Services Group, Inc. (2014). Disproportionate minority contact. Washington, DC: Office of Juvenile Justice and Delinquency Prevention. Available at: https://www.ojjdp.gov/mpg/litreviews/Disproportionate_Minority_contact.pdf

Huizinga, D., Thornberry, T. P., Knight, K. E., Lovegrove, R. L., Hill, K., & Farrington, D. P. (2007). Disproportionate minority contact in the juvenile justice system: A study of differential minority arrest/referral to court in three cities. Washington, DC: Office of Juvenile Justice and Delinquency Prevention.

Kakade, M., Duarte, C. S., Liu, X., Fuller, C. J., Drucker, E., Hoven, C. W., . . . Wu, P. (2012). Adolescent substance use and other illegal behaviors and racial disparities in criminal justice system involvement: Findings from a US national survey. American Journal of Public Health, 102, 1307–1310. https://doi.org/10.2105/AJPH.2012.300699

Kirk, D. S. (2006). Examining the divergence across self-report and official data sources of inferences about the adolescent life-course of crime. Journal of Quantitative Criminology, 22, 107-129. https://doi.org/10.1007/s10940-006-9004-0

Krohn, M. D., Lizotte, A. J., Phillips, M. D., Thornberry, T. P., & Bell, K. A. (2013). Explaining systematic bias in self-reported measures: Factors that affect the under- and over-reporting of self-reported arrests. Justice Quarterly, 30, 501-528. https://doi.org/10.1080/07418825.2011.606226

Thornberry, T. P., & Krohn, M. D. (2000). The self-report method of measuring delinquency and crime. In D. Duffee (Ed.), Criminal justice 2000 (pp. 33-84). Washington, DC: U.S. Department of Justice, National Institute of Justice.
