ICT Assignment, Class XI IPS 3

ICT ASSIGNMENT

Group leader: Wandi Mulyana

Members: Lina Rahayu
Taufik Wahyudin
Luki Lukmanul Hakim
Fauzi
Yandi Mulyadi

Cryptography

In cryptography, an adversary (rarely opponent or enemy) is a malicious entity whose aim is to prevent the users of a cryptosystem from achieving their goal (primarily privacy, integrity, and availability of data). An adversary's efforts might take the form of attempting to discover secret data, corrupting some of the data in the system, spoofing the identity of a message sender or receiver, or forcing system downtime. Actual adversaries, as opposed to idealized ones, are referred to as attackers. Not surprisingly, the former term predominates in the cryptographic literature and the latter in the computer security literature. Eve, Mallory, Oscar and Trudy are all adversarial characters widely used in both types of texts. This notion of an adversary helps both intuitive and formal reasoning about cryptosystems by casting security analysis as a "game" between the users and a centrally coordinated enemy.

The notion of security of a cryptosystem is meaningful only with respect to particular attacks, usually presumed to be carried out by particular sorts of adversaries. There are several types of adversaries, depending on what capabilities or intentions they are presumed to have. Adversaries may be[1] computationally bounded or unbounded (i.e. in terms of time and storage resources), eavesdropping or Byzantine (i.e. passively listening on or actively corrupting data in the channel), static or adaptive (i.e. having fixed or changing behavior), mobile or non-mobile (e.g. in the context of network security), and so on. In actual security practice, the attacks assigned to such adversaries are often seen, so such notional analysis is not merely theoretical.

How successful an adversary is at breaking a system is measured by its advantage: the difference between the adversary's probability of breaking the system and the probability that the system can be broken by simply guessing. The advantage is specified as a function of the security parameter.
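To make the notion of advantage concrete, here is a minimal sketch (invented for illustration, not from the original text) of a distinguishing game in Python: a challenger encrypts one of two known messages under a fresh random key using a deliberately weak one-byte XOR "cipher", and the adversary guesses which message it saw. The empirical advantage is the adversary's distance from the 1/2 success rate of blind guessing.

```python
# Hedged sketch of an adversary's "advantage" in a distinguishing game.
# The XOR "cipher" below is intentionally insecure so the adversary can win.
import secrets

def encrypt(key: int, message: bytes) -> bytes:
    # Toy single-byte XOR cipher (illustrative only, never use in practice).
    return bytes(b ^ key for b in message)

def adversary(ciphertext: bytes) -> int:
    # m0 is all zeros, so its ciphertext has all-equal bytes; exploit that.
    return 0 if len(set(ciphertext)) == 1 else 1

def play_game() -> bool:
    m0, m1 = b"\x00" * 8, b"attack!!"
    b = secrets.randbelow(2)            # challenger's secret bit
    key = secrets.randbelow(256)        # fresh random key each game
    guess = adversary(encrypt(key, m0 if b == 0 else m1))
    return guess == b

trials = 10_000
wins = sum(play_game() for _ in range(trials))
advantage = abs(wins / trials - 0.5)    # distance from blind guessing
print(f"empirical advantage ~ {advantage:.3f}")
```

Because the toy cipher leaks whether all plaintext bytes were equal, this adversary wins essentially every game and its advantage approaches the maximum of 0.5; against a secure cipher the advantage should be negligible in the security parameter.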

Communications protocol

In telecommunications, a communications protocol is a system of rules that allows two or more entities of a communications system to transmit information via any kind of variation of a physical quantity. The protocol defines the rules, syntax, semantics and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both.[1]

Communicating systems use well-defined formats (protocols) for exchanging messages. Each message has an exact meaning intended to elicit a response from a range of possible responses pre-determined for that particular situation. The specified behavior is typically independent of how it is to be implemented. Communications protocols have to be agreed upon by the parties involved.[2] To reach agreement, a protocol may be developed into a technical standard. A programming language describes the same for computations, so there is a close analogy between protocols and programming languages: protocols are to communications as programming languages are to computations.[3]

Information security

Information security, sometimes shortened to InfoSec, is the practice of defending information from unauthorized access, use, disclosure, disruption, modification, perusal, inspection, recording or destruction. It is a general term that can be used regardless of the form the data may take (e.g. electronic, physical).[1]

Confidentiality

Confidentiality is a set of rules or a promise that limits access or places restrictions on certain types of information.

Legal confidentiality

Lawyers are often required by law to keep confidential anything pertaining to the representation of a client. The duty of confidentiality is much broader than the attorney–client evidentiary privilege, which covers only communications between the attorney and the client. Both the privilege and the duty serve the purpose of encouraging clients to speak frankly about their cases, so that lawyers can carry out their duty to provide clients with zealous representation. Otherwise, the opposing side may be able to surprise the lawyer in court with something he did not know about his client, which may weaken the client's position. Also, a distrustful client might hide a relevant fact which he thinks is incriminating but which a skilled lawyer could turn to the client's advantage (for example, by raising affirmative defenses like self-defense). However, most jurisdictions have exceptions for situations where the lawyer has reason to believe that the client may kill or seriously injure someone, may cause substantial injury to the financial interest or property of another, or is using (or seeking to use) the lawyer's services to perpetrate a crime or fraud.

In such situations the lawyer has the discretion, but not the obligation, to disclose information designed to prevent the planned action. Most states have a version of this discretionary disclosure rule under Rules of Professional Conduct, Rule 1.6 (or its equivalent). A few jurisdictions have made this duty mandatory. Such exceptions generally do not cover crimes that have already occurred, even in extreme cases where murderers have confessed the location of missing bodies to their lawyers but the police are still looking for those bodies. The U.S. Supreme Court and many state supreme courts have affirmed the right of a lawyer to withhold information in such situations; otherwise, it would be impossible for any criminal defendant to obtain a zealous defense.

California is famous for having one of the strongest duties of confidentiality in the world; its lawyers must protect client confidences at "every peril to himself [or herself]" under former California Business and Professions Code section 6068(e). Until an amendment in 2004 (which turned subsection (e) into subsection (e)(1) and added subsection (e)(2) to section 6068), California lawyers were not even permitted to disclose that a client was about to commit murder or assault. The Supreme Court of California promptly amended the California Rules of Professional Conduct to conform to the new exception in the revised statute.

Recent legislation in the UK curtails the confidentiality that professionals like lawyers and accountants can maintain at the expense of the state. Accountants, for example, are required to disclose to the state any suspicions of fraudulent accounting and even the legitimate use of tax-saving schemes if those schemes are not already known to the tax authorities.

History of the English law about confidentiality

The modern English law of confidence stems from the judgment of the Lord Chancellor, Lord Cottenham,[1] in which he restrained the defendant from publishing a catalogue of private etchings made by Queen Victoria and Prince Albert (Prince Albert v Strange). However, the jurisprudential basis of confidentiality remained largely unexamined until the case of Saltman Engineering Co. Ltd. v Campbell Engineering Co. Ltd.,[2] in which the Court of Appeal upheld the existence of an equitable doctrine of confidence, independent of contract.

In Coco v A.N. Clark (Engineers) Ltd [1969] R.P.C. 41, Megarry J developed an influential tripartite analysis of the essential ingredients of the cause of action for breach of confidence: the information must be confidential in quality and nature,[3][4][5] it must be imparted so as to import an obligation of confidence,[6][7] and there must be an unauthorised use[8][9] of that information to the detriment[10] of the party communicating it.[11] The law in its then-current state of development was authoritatively summarised by Lord Goff in the Spycatcher case.[12] He identified three qualifications limiting the broad general principle that a duty of confidence arises when confidential information comes to the knowledge of a person in circumstances where he has notice that the information is confidential.

The incorporation into domestic law of Article 8 of the European Convention on Human Rights by the Human Rights Act 1998 has since had a profound effect on the development of the English law of confidentiality. Article 8 provides that everyone has the right to respect for his private and family life, his home and his correspondence. In Campbell v MGN Ltd,[13] the House of Lords held that the Daily Mirror had breached Naomi Campbell's confidentiality rights by publishing reports and pictures of her attendance at Narcotics Anonymous meetings. Although their lordships were divided 3–2 as to the result of the appeal and adopted slightly different formulations of the applicable principles, there was broad agreement that, in confidentiality cases involving issues of privacy, the focus shifts from the nature of the relationship between claimant and defendant to (a) an examination of the nature of the information itself and (b) a balancing exercise between the claimant's rights under Article 8 and the defendant's competing rights (for example, under Article 10, to free speech). It presently remains unclear to what extent and how this judge-led development of a partial law of privacy will impact on the equitable principles of confidentiality as traditionally understood.

Medical confidentiality

Confidentiality is commonly applied to conversations between doctors and patients. Legal protections prevent physicians from revealing certain discussions with patients, even under oath in court.[14] This physician–patient privilege only applies to secrets shared between physician and patient during the course of providing medical care.[14] The rule dates back at least to the Hippocratic Oath, which reads: "Whatever, in connection with my professional service, or not in connection with it, I see or hear, in the life of men, which ought not to be spoken of abroad, I will not divulge, as reckoning that all such should be kept secret." Traditionally, medical ethics has viewed the duty of confidentiality as a relatively non-negotiable tenet of medical practice.

In the UK, information about an individual's HIV status is kept confidential within the NHS. This is based in law, in the NHS Constitution and in key NHS rules and procedures. It is also outlined in every NHS employee's contract of employment and in professional standards set by regulatory bodies. The National AIDS Trust's Confidentiality in the NHS: Your Information, Your Rights[15] outlines these rights. However, there are a few limited instances when a healthcare worker can share personal information without consent if it is in the public interest. These instances are set out in guidance from the General Medical Council,[16] the regulatory body for doctors. Sometimes the healthcare worker has to provide the information, if required by law or in response to a court order.

In the United States, confidentiality is mandated by HIPAA, specifically the Privacy Rule, and by various state laws, some more rigorous than HIPAA. However, numerous exceptions to the rules have been carved out over the years. For example, many American states require physicians to report gunshot wounds to the police and impaired drivers to the Department of Motor Vehicles.

Confidentiality is also challenged in cases involving the diagnosis of a sexually transmitted disease in a patient who refuses to reveal the diagnosis to a spouse, and in the termination of a pregnancy in an underage patient without the knowledge of the patient's parents. Many states in the U.S. have laws governing parental notification in underage abortion.[17]

Clinical and counseling psychology

The ethical principle of confidentiality requires that information shared by the client with the therapist in the course of treatment is not shared with others. This is important for the therapeutic alliance, as it promotes an environment of trust. There are important exceptions to confidentiality, namely where it conflicts with the clinician's duty to warn or duty to protect. This includes instances of suicidal behavior or homicidal plans, child abuse, elder abuse and dependent adult abuse.[18] On 26 June 2012, a judge of Oslo District Court apologized for the court's hearing of testimony (on 14 June, regarding contact with Child Welfare Services (Norway)) that was covered by confidentiality that had not been waived at that point in the trial of Anders Behring Breivik.[19]

Data integrity

Data integrity refers to maintaining and assuring the accuracy and consistency of data over its entire life-cycle,[1] and it is a critical aspect of the design, implementation and usage of any system which stores, processes, or retrieves data. The term data integrity is broad in scope and may have widely different meanings depending on the specific context, even under the same general umbrella of computing. This article provides only a broad overview of some of the different types and concerns of data integrity.

Data integrity is the opposite of data corruption, which is a form of data loss. The overall intent of any data integrity technique is the same: ensure data is recorded exactly as intended (such as a database correctly rejecting mutually exclusive possibilities) and, upon later retrieval, ensure the data is the same as when it was originally recorded. In short, data integrity aims to prevent unintentional changes to information. Data integrity is not to be confused with data security, the discipline of protecting data from unauthorized parties.

Any unintended change to data as the result of a storage, retrieval or processing operation, including malicious intent, unexpected hardware failure, and human error, is a failure of data integrity. If the change is the result of unauthorized access, it may also be a failure of data security. Depending on the data involved, the effects range from something as benign as a single pixel in an image appearing a different color than was originally recorded, to the loss of vacation pictures or a business-critical database, to catastrophic loss of human life in a life-critical system.

Integrity types

Physical integrity

Physical integrity deals with challenges associated with correctly storing and fetching the data itself. Challenges with physical integrity may include electromechanical faults, design flaws, material fatigue, corrosion, power outages, natural disasters, acts of war and terrorism, and other special environmental hazards. Ensuring physical integrity includes methods such as redundant hardware, an uninterruptible power supply, certain types of RAID arrays, use of a clustered file system, using file systems that employ block-level checksums such as ZFS, storage arrays that compute parity calculations such as exclusive or or use a cryptographic hash function, and even having a watchdog timer on critical subsystems.

Physical integrity often makes extensive use of error-detecting algorithms known as error-correcting codes. Human-induced data integrity errors are often detected through the use of simpler checks and algorithms, such as the Damm algorithm or the Luhn algorithm. These are used to maintain data integrity after manual transcription from one computer system to another by a human intermediary (e.g. credit card or bank routing numbers). Computer-induced transcription errors can be detected through hash functions. In production systems these techniques are used together to ensure various degrees of data integrity. For example, a computer file system may be configured on a fault-tolerant RAID array but might not provide block-level checksums to detect and prevent silent data corruption; as another example, a database management system might be compliant with the ACID properties while the RAID controller or the hard disk drive's internal write cache is not.

Logical integrity

This type of integrity is concerned with the correctness or rationality of a piece of data, given a particular context. It includes topics such as referential integrity and entity integrity in a relational database, or correctly ignoring impossible sensor data in robotic systems. These concerns involve ensuring that the data "makes sense" given its environment. Challenges include software bugs, design flaws, and human errors. Common methods of ensuring logical integrity include check constraints, foreign key constraints, program assertions, and other run-time sanity checks. Both physical and logical integrity share many common challenges, such as human errors and design flaws, and both must appropriately deal with concurrent requests to record and retrieve data, the latter of which is entirely a subject on its own.
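As an illustration of the Luhn algorithm mentioned above (the check-digit scheme is standard; this particular implementation is a sketch, not taken from the original text), the function below catches most single-digit typos and adjacent transpositions in numbers such as credit card numbers:

```python
def luhn_valid(number: str) -> bool:
    """Return True if `number` passes the Luhn check (e.g. card numbers)."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    parity = len(digits) % 2
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == parity:      # every second digit, counted from the right
            d *= 2
            if d > 9:
                d -= 9           # same as summing the two digits of d
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))  # True: classic Luhn test number
print(luhn_valid("79927398710"))  # False: last digit mistyped
```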

Databases

Data integrity contains guidelines for data retention, specifying or guaranteeing the length of time data can be retained in a particular database, and what can be done with data values when their validity or usefulness expires. To achieve data integrity, these rules are consistently and routinely applied to all data entering the system, and any relaxation of enforcement could cause errors in the data. Data integrity also includes checks and correction for invalid data, based on a fixed schema or a predefined set of rules (for example, textual data entered where a date-time value is required). Rules for data derivation are also applicable, specifying how a data value is derived from algorithms, contributors and conditions, and the conditions under which the value could be re-derived.

Types of integrity constraints

Data integrity is normally enforced in a database system by a series of integrity constraints or rules. Three types of integrity constraints are an inherent part of the relational data model: entity integrity, referential integrity and domain integrity:

Entity integrity concerns the concept of a primary key. Entity integrity is an integrity rule which states that every table must have a primary key and that the column or columns chosen to be the primary key should be unique and not null.

Referential integrity concerns the concept of a foreign key. The referential integrity rule states that any foreign-key value can only be in one of two states. The usual state of affairs is that the foreign-key value refers to a primary-key value of some table in the database. Occasionally, depending on the rules of the data owner, a foreign-key value can be null; in this case we are explicitly saying that either there is no relationship between the objects represented in the database or that this relationship is unknown.

Domain integrity specifies that all columns in a relational database must be declared upon a defined domain. The primary unit of data in the relational data model is the data item; such data items are said to be non-decomposable or atomic. A domain is a set of values of the same type. Domains are therefore pools of values from which the actual values appearing in the columns of a table are drawn.

User-defined integrity refers to a set of rules specified by a user which do not belong to the entity, domain or referential integrity categories.
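The three relational constraint types above can be demonstrated with Python's built-in sqlite3 module. The schema and data below are invented for illustration (a hedged sketch, not from the original text):

```python
# Hedged sketch: entity, referential and domain integrity in SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.executescript("""
CREATE TABLE customer (
    id   INTEGER PRIMARY KEY,                     -- entity integrity
    name TEXT NOT NULL
);
CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customer(id),  -- referential integrity
    amount      NUMERIC CHECK (amount > 0)        -- domain integrity
);
""")

conn.execute("INSERT INTO customer VALUES (1, 'Lina')")
conn.execute("INSERT INTO orders VALUES (1, 1, 25.0)")        # OK: parent exists

try:
    conn.execute("INSERT INTO orders VALUES (2, 99, 10.0)")   # no such customer
except sqlite3.IntegrityError as e:
    print("rejected by referential integrity:", e)

try:
    conn.execute("INSERT INTO orders VALUES (3, 1, -5.0)")    # violates CHECK
except sqlite3.IntegrityError as e:
    print("rejected by domain integrity:", e)
```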

If a database supports these features, it is the responsibility of the database to ensure data integrity as well as the consistency model for data storage and retrieval. If a database does not support these features, it is the responsibility of the applications to ensure data integrity, while the database supports the consistency model for data storage and retrieval. Having a single, well-controlled, and well-defined data-integrity system increases stability (one centralized system performs all data integrity operations), performance (all data integrity operations are performed in the same tier as the consistency model), re-usability (all applications benefit from a single centralized data-integrity system) and maintainability (one centralized system for all data integrity administration).

As of 2012, since all modern databases support these features (see Comparison of relational database management systems), it has become the de facto responsibility of the database to ensure data integrity. Outdated and legacy systems that use file systems (text, spreadsheets, ISAM, flat files, etc.) for their consistency model lack any kind of data-integrity model. This requires organizations to invest a large amount of time, money and resources in moving their data to modern databases. Many companies, and indeed many database systems themselves, offer products and services to migrate outdated and legacy systems to modern databases and so provide these data-integrity features. This offers organizations substantial savings in time, money and resources because they do not have to develop per-application data-integrity systems that must be refactored each time the business requirements change.

Examples

An example of a data-integrity mechanism is the parent-and-child relationship of related records. If a parent record owns one or more related child records, all of the referential integrity processes are handled by the database itself, which automatically ensures the accuracy and integrity of the data: no child record can exist without a parent (that is, be orphaned), no parent can lose its child records, and no parent record can be deleted while it owns any child records. All of this is handled at the database level and does not require coding integrity checks into each application.

File systems

Various research results show that neither widespread filesystems (including UFS, Ext, XFS, JFS and NTFS) nor hardware RAID solutions provide sufficient protection against data integrity problems.[2][3][4][5][6] Some filesystems (including Btrfs and ZFS) provide internal data and metadata checksumming, which is used for detecting silent data corruption and improving data integrity. If corruption is detected that way and internal RAID mechanisms provided by those filesystems are also used, such filesystems can additionally reconstruct corrupted data in a transparent way.[7]

This approach allows improved data integrity protection covering the entire data path, which is usually known as end-to-end data protection.[8]

Data storage

Apart from data in databases, standards exist to address the integrity of data on storage devices.[9]
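A minimal sketch of the checksum idea behind end-to-end data protection (the file name and payload are illustrative; real systems such as ZFS checksum every block inside the filesystem rather than per file):

```python
# Hedged sketch: detect silent corruption by recording a digest at write time
# and recomputing it at read time.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream in 64 KiB chunks
            h.update(chunk)
    return h.hexdigest()

with open("data.bin", "wb") as f:     # illustrative payload
    f.write(b"example payload")

expected = sha256_of("data.bin")      # digest stored alongside the data
# Later, at read time: recompute and compare.
if sha256_of("data.bin") != expected:
    raise IOError("silent data corruption detected")
print("checksum OK")
```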

Authentication

Authentication has relevance to multiple fields. In art, antiques, and anthropology, a common problem is verifying that a given artifact was produced by a certain person or was produced in a certain place or period of history. In computer science, verifying a person's identity is often required to secure access to confidential data or systems.

Authentication can be considered to be of three types. The first type of authentication is accepting proof of identity given by a credible person who has first-hand evidence that the identity is genuine. When authentication is required of art or physical objects, this proof could be a friend, family member or colleague attesting to the item's provenance, perhaps by having witnessed the item in its creator's possession. With autographed sports memorabilia, this could involve someone attesting that they witnessed the object being signed. A vendor selling branded items implies authenticity, while he or she may not have evidence that every step in the supply chain was authenticated. Authority-based trust relationships (centralized) drive the majority of secured internet communication through known public certificate authorities; peer-based trust (decentralized, web of trust) is used for personal services like email or files (Pretty Good Privacy, GNU Privacy Guard), where trust is established by known individuals signing each other's keys (proof of identity), for instance at key signing parties.

The second type of authentication is comparing the attributes of the object itself to what is known about objects of that origin. For example, an art expert might look for similarities in the style of painting, check the location and form of a signature, or compare the object to an old photograph. An archaeologist might use carbon dating to verify the age of an artifact, do a chemical analysis of the materials used, or compare the style of construction or decoration to other artifacts of similar origin. The physics of sound and light, and comparison with a known physical environment, can be used to examine the authenticity of audio recordings, photographs, or videos. Documents can be verified as being created on ink or paper readily available at the time of the item's implied creation.

Attribute comparison may be vulnerable to forgery. In general, it relies on the facts that creating a forgery indistinguishable from a genuine artifact requires expert knowledge, that mistakes are easily made, and that the amount of effort required to do so is considerably greater than the amount of profit that can be gained from the forgery. In art and antiques, certificates are of great importance for authenticating an object of interest and value. Certificates can, however, also be forged, and the authentication of these poses a problem. For instance, the son of Han van Meegeren, the well-known art forger, forged the work of his father and provided a certificate for its provenance as well; see the article Jacques van Meegeren. Criminal and civil penalties for fraud, forgery, and counterfeiting can reduce the incentive for falsification, depending on the risk of getting caught. Currency and other financial instruments commonly use this second type of authentication method: bills, coins, and cheques incorporate hard-to-duplicate physical features, such as fine printing or engraving, distinctive feel, watermarks, and holographic imagery, which are easy for trained receivers to verify.

The third type of authentication relies on documentation or other external affirmations. In criminal courts, the rules of evidence often require establishing the chain of custody of evidence presented. This can be accomplished through a written evidence log, or by testimony from the police detectives and forensics staff that handled it. Some antiques are accompanied by certificates attesting to their authenticity. Signed sports memorabilia is usually accompanied by a certificate of authenticity. These external records have their own problems of forgery and perjury, and are also vulnerable to being separated from the artifact and lost. In computer science, a user can be given access to secure systems based on user credentials that imply authenticity. A network administrator can give a user a password, or provide the user with a key card or other access device to allow system access. In this case, authenticity is implied but not guaranteed.

Consumer goods such as pharmaceuticals, perfume and fashion clothing can use all three forms of authentication to prevent counterfeit goods from taking advantage of a popular brand's reputation. Having an item for sale in a reputable store implicitly attests to it being genuine, the first type of authentication. The second type of authentication might involve comparing the quality and craftsmanship of an item, such as an expensive handbag, to genuine articles. The third type of authentication could be the presence of a trademark on the item, which is a legally protected marking, or any other identifying feature which aids consumers in the identification of genuine brand-name goods. With software, companies have taken great steps to protect themselves from counterfeiters, including adding holograms, security rings, security threads and color-shifting ink.[1]

Factors and identity

The ways in which someone may be authenticated fall into three categories, based on what are known as the factors of authentication: something the user knows, something the user has, and something the user is. Each authentication factor covers a range of elements used to authenticate or verify a person's identity prior to being granted access, approving a transaction request, signing a document or other work product, granting authority to others, and establishing a chain of authority. Security research has determined that for a positive authentication, elements from at least two, and preferably all three, factors should be verified.[2] The three factors (classes) and some of the elements of each factor are:

the knowledge factors: something the user knows (e.g., a password, pass phrase or personal identification number (PIN), or a challenge response in which the user must answer a question or reproduce a pattern);

the ownership factors: something the user has (e.g., a wrist band, ID card, security token, cell phone with built-in hardware token, software token, or cell phone holding a software token);

the inherence factors: something the user is or does (e.g., fingerprint, retinal pattern, DNA sequence (there are assorted definitions of what is sufficient), signature, face, voice, unique bio-electric signals, or other biometric identifier).

Two-factor authentication

When elements representing two factors are required for authentication, the term two-factor authentication is applied; e.g. a bank card (something the user has) and a PIN (something the user knows). Business networks may require users to provide a password (knowledge factor) and a pseudorandom number from a security token (ownership factor). Access to a very-high-security system might require a mantrap screening of height, weight, facial, and fingerprint checks (several inherence factor elements) plus a PIN and a day code (knowledge factor elements), but this is still two-factor authentication.
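A hedged sketch of an ownership-factor check is a time-based one-time password (TOTP, RFC 6238), the kind of six-digit code generated by a phone app and verified independently by the server. The secret below is an illustrative example value; a real deployment provisions a per-user secret.

```python
# Hedged sketch of TOTP (RFC 6238) using only the standard library.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // step)   # 30-second window
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Second factor: server and phone compute the same code from the shared secret.
print(totp("JBSWY3DPEHPK3PXP"))
```

Combined with a password (knowledge factor), this gives two-factor authentication: stealing the password alone is not enough without the device holding the secret.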

Product authentication

A security hologram label on an electronics box for authentication

Counterfeit products are often offered to consumers as being authentic. Counterfeit consumer goods such as electronics, music, apparel, and counterfeit medications have been sold as being legitimate. Efforts to control the supply chain and educate consumers help ensure that authentic products are sold and used. Even security printing on packages, labels, and nameplates, however, is subject to counterfeiting. A secure key storage device can be used for authentication in consumer electronics and network authentication. Generally, the device to be authenticated needs some sort of wireless or wired digital connection to either a host system or a network. Nonetheless, the component being authenticated need not be electronic in nature, as an authentication chip can be mechanically attached and read through a connector to the host, e.g. an authenticated ink tank for use with a printer. For products and services that these secure coprocessors can be applied to, they can offer a solution that can be much more difficult to counterfeit than most other options while at the same time being more easily verified.

Packaging

Packaging and labeling can be engineered to help reduce the risks of counterfeit consumer goods or the theft and resale of products.[3][4] Some package constructions are more difficult to copy and some have pilfer-indicating seals. Counterfeit goods, unauthorized sales (diversion), material substitution and tampering can all be reduced with these anti-counterfeiting technologies. Packages may include authentication seals and use security printing to help indicate that the package and contents are not counterfeit; these too are subject to counterfeiting. Packages also can include anti-theft devices, such as dye packs, RFID tags, or electronic article surveillance[5] tags that can be activated or detected by devices at exit points and require specialized tools to deactivate. Anti-counterfeiting technologies that can be used with packaging include:

Taggant fingerprinting - uniquely coded microscopic materials that are verified from a database
Encrypted micro-particles - unpredictably placed markings (numbers, layers and colors) not visible to the human eye
Holograms - graphics printed on seals, patches, foils or labels and used at point of sale for visual verification
Micro-printing - second-line authentication often used on currencies
Serialized barcodes
UV printing - marks only visible under UV light
Track and trace systems - use codes to link products to a database tracking system
Water indicators - become visible when contacted with water
DNA tracking - genes embedded onto labels that can be traced
Color-shifting ink or film - visible marks that switch colors or texture when tilted
Tamper-evident seals and tapes - destructible or graphically verifiable at point of sale
2D barcodes - data codes that can be tracked
RFID chips

Information content

The authentication of information can pose special problems with electronic communication, such as vulnerability to man-in-the-middle attacks, whereby a third party taps into the communication stream and poses as each of the two other communicating parties in order to intercept information from each. Extra identity factors can be required to authenticate each party's identity.

Literary forgery can involve imitating the style of a famous author. If an original manuscript, typewritten text, or recording is available, then the medium itself (or its packaging, anything from a box to e-mail headers) can help prove or disprove the authenticity of the document. However, text, audio, and video can be copied into new media, possibly leaving only the informational content itself to use in authentication. Various systems have been invented to allow authors to provide a means for readers to reliably authenticate that a given message originated from or was relayed by them. These involve authentication factors like:

a difficult-to-reproduce physical artifact, such as a seal, signature, watermark or special stationery;

a shared secret, such as a passphrase, in the content of the message;

an electronic signature; public-key infrastructure is often used to cryptographically guarantee that a message has been signed by the holder of a particular private key.
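The electronic-signature idea in the last item can be sketched with the third-party pyca/cryptography package (pip install cryptography); the message text is illustrative. Only the holder of the private key can produce the signature, but anyone holding the public key can verify it:

```python
# Hedged sketch of signing and verifying a message with Ed25519.
from cryptography.hazmat.primitives.asymmetric import ed25519

private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"I, the key holder, wrote this message."
signature = private_key.sign(message)

public_key.verify(signature, message)   # raises InvalidSignature if tampered
print("signature verified")
```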

The opposite problem is detection of plagiarism, where information from a different author is passed off as a person's own work. A common technique for proving plagiarism is the discovery of another copy of the same or very similar text which has different attribution. In some cases, excessively high quality or a style mismatch may raise suspicion of plagiarism.

Factual verification

Determining the truth or factual accuracy of information in a message is generally considered a separate problem from authentication. A wide range of techniques, from detective work to fact checking in journalism to scientific experiment, might be employed.

Video authentication

It is sometimes necessary to authenticate the veracity of video recordings used as evidence in judicial proceedings. Proper chain-of-custody records and secure storage facilities can help ensure the admissibility of digital or analog recordings by the court.

Literacy and literature authentication

In literacy, authentication is a reader's process of questioning the veracity of an aspect of literature and then verifying those questions via research. The fundamental question for authentication of literature is: do you believe it? Related to that, an authentication project is a reading and writing activity in which students document the relevant research process.[6] It builds students' critical literacy. The documentation materials for literature go beyond narrative texts and likely include informational texts, primary sources, and multimedia. The process typically involves both internet and hands-on library research. When authenticating historical fiction in particular, readers consider the extent to which the major historical events, as well as the culture portrayed (e.g., the language, clothing, food, gender roles), are believable for the period.[7]

History and state of the art

Historically, fingerprints have been used as the most authoritative method of authentication, but recent court cases in the US and elsewhere have raised fundamental doubts about fingerprint reliability. Outside of the legal system as well, fingerprints have been shown to be easily spoofable, with British Telecom's top computer-security official noting that "few" fingerprint readers have not already been tricked by one spoof or another.[8] Hybrid or two-tiered authentication methods offer a compelling solution, such as private keys encrypted by fingerprint inside a USB device. In a computer data context, cryptographic methods have been developed (see digital signature and challenge-response authentication) which are not currently spoofable if and only if the originator's key has not been compromised. That the originator (or anyone other than an attacker) knows (or doesn't know) about a compromise is irrelevant. It is not known whether these cryptographically based authentication methods are provably secure, since unanticipated mathematical developments may make them vulnerable to attack in the future. If that were to occur, it could call into question much of the authentication in the past. In particular, a digitally signed contract may be questioned when a new attack on the cryptography underlying the signature is discovered.

Strong authentication

The U.S. government's National Information Assurance Glossary defines strong authentication as a layered authentication approach relying on two or more authenticators to establish the identity of an originator or receiver of information. This definition is consistent with that of the European Central Bank, as discussed in the strong authentication entry.

Authorization

A soldier checks a driver's identification card before allowing her to enter a military base.

The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that "you are who you say you are", authorization is the process of verifying that "you are permitted to do what you are trying to do". Authorization thus presupposes authentication. For example, a client showing proper identification credentials to a bank teller is asking to be authenticated as really being the one whose identification he is showing. A client whose authentication request is approved becomes authorized to access the accounts of that account holder, but no others. Note, however, that if a stranger tries to access someone else's account with his own identification credentials, the stranger's credentials will still be successfully authenticated, because they are genuine and not counterfeit; but the stranger will not be authorized to access the account, as his credentials had not previously been set as eligible to access that account, even though they are valid (i.e. authentic).

Similarly, when someone tries to log on to a computer, they are usually first asked to identify themselves with a login name and to support that with a password. Afterwards, this combination is checked against an existing login-password validity record to determine whether the combination is authentic. If so, the user becomes authenticated (i.e. the identification supplied in the first step is valid, or authentic). Finally, a set of pre-defined permissions and restrictions for that particular login name is assigned to the user, which completes the final step, authorization. Even though authorization cannot occur without authentication, the former term is sometimes used to mean the combination of both.
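The two steps can be separated cleanly in code. The sketch below uses invented names and a simplified salted hash (a real system would use a dedicated password-hashing function such as bcrypt or argon2); it is an illustration of the concept, not a production design:

```python
# Hedged sketch: authentication ("are you who you say you are?") vs.
# authorization ("are you allowed to do this?").
import hashlib, hmac

# Stored server-side: salted password hashes and per-user permissions.
USERS = {"alice": hashlib.sha256(b"salt" + b"correct horse").hexdigest()}
PERMISSIONS = {"alice": {"read_account:alice"}}

def authenticate(user: str, password: str) -> bool:
    digest = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return user in USERS and hmac.compare_digest(USERS[user], digest)

def authorize(user: str, action: str) -> bool:
    return action in PERMISSIONS.get(user, set())

if authenticate("alice", "correct horse"):          # step 1: authentication
    print("read own account:", authorize("alice", "read_account:alice"))   # True
    print("read bob's account:", authorize("alice", "read_account:bob"))   # False
```

Note how Alice's genuine credentials authenticate successfully, yet she is still refused access to another account, mirroring the bank-teller example above.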

distinguish "authentication" from the closely related "authorization", the shorthand notations A1 (authentication), A2 (authorization) as well as AuthN / AuthZ (AuthR) or Au / Az are used in some communities.[9] Normally delegation was considered to be a part of authorization domain. Recently authentication is also used for various type of delegation tasks. Delegation in IT network is also a new but evolving field.[10] Access control Main article: Access control, One familiar use of authentication and authorization is access control. A computer system that is supposed to be used only by those authorized must attempt to detect and exclude the unauthorized. Access to it is therefore usually controlled by insisting on an authentication procedure to establish with some degree of confidence the identity of the user, granting privileges established for that identity. One such procedure involves the usage of Layer 8 which allows IT administrators to identify users, control Internet activity of users in the network, set user based policies and generate reports by username. Common examples of access control involving authentication include:

(15)

tele-network enabled device like mobile phone, as an authentication password/PIN. A computer program using a blind credential to authenticate to another program Entering a country with a passport Logging in to a computer Using a

confirmation E-mail to verify ownership of an e-mail address Using an Internet banking system Withdrawing cash from an ATM In some cases, ease of access is balanced against the strictness of access checks. For example, the credit card network does not require a personal identification number for authentication of the claimed identity; and a small transaction usually does not even require a signature of the authenticated person for proof of authorization of the

transaction. The security of the system is maintained by limiting distribution of credit card numbers, and by the threat of punishment for fraud. Security experts argue that it is impossible to prove the identity of a computer user with absolute certainty. It is only possible to apply one or more tests which, if passed, have been previously declared to be sufficient to proceed. The problem is to determine which tests are sufficient, and many such are inadequate. Any given test can be spoofed one way or another, with varying degrees of difficulty. Computer security experts are now also recognising that despite extensive efforts, as a business, research and network community, we still do not have a secure understanding of the requirements for authentication, in a range of circumstances. Lacking this understanding is a significant barrier to identifying optimum methods of authentication. major questions are:

What is authentication for? Who benefits from authentication, and who is disadvantaged by authentication failures? What disadvantages can effective authentication actually guard against?

Non-repudiation

Non-repudiation refers to a state of affairs where the author of a statement will not be able to successfully challenge the authorship of the statement or the validity of an associated contract. The term is often seen in a legal setting where the authenticity of a signature is being challenged. In such an instance, the authenticity is being "repudiated".

In security

In a general sense, non-repudiation involves associating actions or changes with a unique individual. For a secure area, for example, it may be desirable to implement a key card access system. Non-repudiation would be violated if it were not also a strictly enforced policy to prohibit sharing of the key cards and to immediately report lost or stolen cards; otherwise, determining who performed the action of opening the door cannot be done trivially. Similarly, for computer accounts, the individual owner of an account must not allow others to use it, for instance by giving away the account's password, and a policy should be implemented to enforce this. This prevents the owner of the account from denying actions performed by the account.[1]

In digital security

Regarding digital security, the cryptological meaning and application of non-repudiation shifts to mean:[2] a service that provides proof of the integrity and origin of data, and an authentication that can be asserted to be genuine with high assurance.

Proof of data integrity is typically the easiest of these requirements to accomplish. A data hash, such as SHA-2, is usually sufficient to establish that the likelihood of data being changed undetectably is extremely low. Even with this safeguard, it is still possible to tamper with data in transit, either through a man-in-the-middle attack or phishing. Because of this, data integrity is best asserted when the recipient already possesses the necessary verification information. The most common method of asserting the digital origin of data is through digital certificates, a form of public key infrastructure to which digital signatures belong. Note that the public key scheme is not used for encryption in this form; confidentiality is not achieved by signing a message with a private key (since anyone can obtain the public key to reverse the signature). Verifying the digital origin means that the certified/signed data can, with reasonable certainty, be trusted to be from somebody who possesses the private key corresponding to the signing certificate. If the key is not properly safeguarded by the original owner, digital forgery can become a major concern.

Trusted third parties (TTPs)

The ways in which a party may attempt to repudiate a signature present a challenge to the trustworthiness of the signatures themselves. The standard approach to mitigating these risks is to involve a trusted third party. The two most common TTPs are forensic analysts and notaries. A forensic analyst specializing in handwriting can look at a signature, compare it to a known valid signature, and make a reasonable assessment of the legitimacy of the first signature. A notary provides a witness whose job is to verify the identity of an individual by checking other credentials and affixing their certification that the party signing is who they claim to be. A notary provides the extra benefit of maintaining independent logs of their transactions, complete with the type of credential checked and another signature that can be independently verified by the preceding forensic analyst. For this double security, notaries are the preferred form of verification.

On the digital side, the only TTP is the repository for public key certificates. This provides the recipient with the ability to verify the origin of an item even if no direct exchange of the public information has ever been made. The digital signature, however, is forensically identical in both legitimate and forged uses: if someone possesses the private key, they can create a "real" signature. The protection of the private key is the idea behind the United States Department of Defense's Common Access Card (CAC), which never allows the key to leave the card; the holder must therefore possess the card in addition to the personal identification number (PIN) needed to unlock it for encryption and digital signatures.

Mathematics

Euclid (holding calipers), Greek mathematician, 3rd century BC, as imagined by Raphael in this detail from The School of Athens.[1]

Mathematics (from Greek μάθημα máthēma, "knowledge, study, learning") is the study of topics such as quantity (numbers),[2] structure,[3] space,[2] and change. Mathematicians seek out patterns and use them to formulate new conjectures, resolving the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, then mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, measurement, and the systematic study of the shapes and motions of physical objects. Practical mathematics has been a human activity for as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry.

Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Since the pioneering work of Giuseppe Peano (1858–1932), David Hilbert (1862–1943), and others on axiomatic systems in the late 19th century, it has become customary to view mathematical research as establishing truth by rigorous deduction from appropriately chosen axioms and definitions. Mathematics developed at a relatively slow pace until the Renaissance, when mathematical innovations interacting with new scientific discoveries led to a rapid increase in the rate of mathematical discovery that has continued to the present day.[11]

Galileo Galilei (1564–1642) said, "The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth."[12] Carl Friedrich Gauss (1777–1855) referred to mathematics as "the Queen of the Sciences".[13] Benjamin Peirce (1809–1880) called mathematics "the science that draws necessary conclusions".[14] David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise."[15] Albert Einstein (1879–1955) stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."[16] French mathematician Claire Voisin states, "There is creative drive in mathematics, it's all about movement trying to express itself."[17]

Mathematics is used throughout the world as an essential tool in many fields, including natural science, engineering, medicine, finance and the social sciences. Applied mathematics, the branch of mathematics concerned with application of mathematical knowledge to other fields, inspires and makes use of new mathematical discoveries, which has led to the development of entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics, and practical applications for what began as pure mathematics are often discovered.[18]

Contents

1 History

1.1 Evolution

1.2 Etymology

2 Definitions of mathematics

2.1 Mathematics as science

3 Inspiration, pure and applied mathematics, and aesthetics

4 Notation, language, and rigor

5 Fields of mathematics

5.1 Foundations and philosophy

5.2 Pure mathematics

5.3 Applied mathematics

6 Mathematical awards

7 See also

8 Notes

9 References

10 Further reading

11 External links

History

Evolution


The evolution of mathematics can be seen as an ever-increasing series of abstractions. The first abstraction, which is shared by many animals,[19] was probably that of numbers: the realization that a collection of two apples and a collection of two oranges (for example) have something in common, namely the quantity of their members.

Greek mathematician Pythagoras (c. 570 – c. 495 BC), commonly credited with discovering the Pythagorean theorem


As evidenced by tallies found on bone, in addition to recognizing how to count physical objects, prehistoric peoples may have also recognized how to count abstract quantities, like time – days, seasons, years.[20]

More complex mathematics did not appear until around 3000 BC, when the Babylonians and Egyptians began using arithmetic, algebra and geometry for taxation and other financial calculations, for building and construction, and for astronomy.[21] The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns, and the recording of time.

In Babylonian mathematics, elementary arithmetic (addition, subtraction, multiplication and division) first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse, with the first known written numerals created by Egyptians in Middle Kingdom texts such as the Rhind Mathematical Papyrus.

Between 600 and 300 BC the Ancient Greeks began a systematic study of mathematics in its own right with Greek mathematics.[22]

Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today. According to Mikhail B. Sevryuk, in the January 2006 issue of the Bulletin of the American Mathematical Society, "The number of papers and books included in the Mathematical Reviews database since 1940 (the first year of operation of MR) is now more than 1.9 million, and more than 75 thousand items are added to the database each year. The overwhelming majority of works in this ocean contain new mathematical theorems and their proofs."[23]

Etymology

The word mathematics comes from the Greek μάθημα (máthēma), which in ancient Greek means "that which is learnt". Its adjective is μαθηματικός (mathēmatikós), meaning "related to learning" or "studious". In particular, μαθηματικὴ τέχνη (mathēmatikḗ tékhnē), Latin: ars mathematica, meant "the mathematical art".

In Latin, and in English until around 1700, the term mathematics more commonly meant "astrology" (or sometimes "astronomy") rather than "mathematics"; the meaning gradually changed to its present one from about 1500 to 1800. This has resulted in several mistranslations: a particularly notorious one is Saint Augustine's warning that Christians should beware of mathematici, meaning astrologers, which is sometimes mistranslated as a condemnation of mathematicians.[26]

The apparent plural form in English, like the French plural form les mathématiques (and the less commonly used singular derivative la mathématique), goes back to the Latin neuter plural mathematica (Cicero), based on the Greek plural τα μαθηματικά (ta mathēmatiká), used by Aristotle (384–322 BC) and meaning roughly "all things mathematical"; although it is plausible that English borrowed only the adjective mathematic(al) and formed the noun mathematics anew, after the pattern of physics and metaphysics, which were inherited from the Greek.[27] In English, the noun mathematics takes singular verb forms. It is often shortened to maths or, in English-speaking North America, math.[28]

Definitions of mathematics


Leonardo Fibonacci, the Italian mathematician who introduced the Hindu–Arabic numeral system to the Western world

Aristotle defined mathematics as "the science of quantity", and this definition prevailed until the 18th century.[29] Starting in the 19th century, when the study of mathematics increased in rigor and began to address abstract topics such as group theory and projective geometry, which have no clear-cut relation to quantity and measurement, mathematicians and philosophers began to propose a variety of new definitions.[30] Some of these definitions emphasize the deductive character of much of mathematics, some emphasize its abstractness, and some emphasize certain topics within mathematics.

Three leading types of definition of mathematics are called logicist, intuitionist, and formalist, each reflecting a different philosophical school of thought.[31] All have severe problems, none has widespread acceptance, and no reconciliation seems possible.[31]

An early definition of mathematics in terms of logic was Benjamin Peirce's "the science that draws necessary conclusions" (1870).[32] In the Principia Mathematica, Bertrand Russell and Alfred North Whitehead advanced the philosophical program known as logicism, and attempted to prove that all mathematical concepts, statements, and principles can be defined and proven entirely in terms of symbolic logic. A logicist definition of mathematics is Russell's "All Mathematics is Symbolic Logic" (1903).[33]

Intuitionist definitions, developing from the philosophy of mathematician L.E.J. Brouwer, identify mathematics with certain mental phenomena. An example of an intuitionist definition is "Mathematics is the mental activity which consists in carrying out constructs one after the other."[31] A peculiarity of intuitionism is that it rejects some mathematical ideas considered valid according to other definitions. In particular, while other philosophies of mathematics allow objects that can be proven to exist even though they cannot be constructed, intuitionism allows only mathematical objects that one can actually construct.

Formalist definitions identify mathematics with its symbols and the rules for operating on them. Haskell Curry defined mathematics simply as "the science of formal systems".[34] A formal system is a set of symbols, or tokens, and some rules telling how the tokens may be combined into formulas. In formal systems, the word axiom has a special meaning, different from the ordinary meaning of "a self-evident truth". In formal systems, an axiom is a combination of tokens that is included in a given formal system without needing to be derived using the rules of the system.

Mathematics as science

Carl Friedrich Gauss, known as the prince of mathematicians

Gauss referred to mathematics as "the Queen of the Sciences".[13] In the original Latin Regina Scientiarum, as well as in German Königin der Wissenschaften, the word corresponding to science means a "field of knowledge", and this was the original meaning of "science" in English, too; mathematics is in this sense a field of knowledge. The specialization restricting the meaning of "science" to natural science follows the rise of Baconian science, which contrasted "natural science" to scholasticism, the Aristotelean method of inquiring from first principles. The role of empirical experimentation and observation is negligible in mathematics, compared to natural sciences such as psychology, biology, or physics. Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."[16] More recently, Marcus du Sautoy has called mathematics "the Queen of Science ... the main driving force behind scientific discovery".[35]

Many philosophers believe that mathematics is not experimentally falsifiable, and thus not a science according to the definition of Karl Popper.[36] However, in the 1930s Gödel's incompleteness theorems convinced many mathematicians[who?] that mathematics cannot be reduced to logic alone, and Karl Popper concluded that "most mathematical theories are, like those of physics and biology, hypothetico-deductive: pure mathematics therefore turns out to be much closer to the natural sciences whose hypotheses are conjectures, than it seemed even recently."[37] Other thinkers, notably Imre Lakatos, have applied a version of falsificationism to mathematics itself.

An alternative view is that certain scientific fields (such as theoretical physics) are mathematics with axioms that are intended to correspond to reality. The theoretical physicist J.M. Ziman proposed that science is public knowledge, and thus includes mathematics.[38] Mathematics shares much in common with many fields in the physical sciences, notably the exploration of the logical consequences of assumptions. Intuition and experimentation also play a role in the formulation of conjectures in both mathematics and the (other) sciences. Experimental mathematics continues to grow in importance within mathematics, and computation and simulation are playing an increasing role in both the sciences and mathematics.

The opinions of mathematicians on this matter are varied. Many mathematicians feel that to call their area a science is to downplay the importance of its aesthetic side, and its history in the traditional seven liberal arts; others feel that to ignore its connection to the sciences is to turn a blind eye to the fact that the interface between mathematics and its applications in science and engineering has driven much development in mathematics.

Inspiration, pure and applied mathematics, and aesthetics

Main article: Mathematical beauty


Isaac Newton (left) and Gottfried Wilhelm Leibniz (right), developers of infinitesimal calculus

Mathematics arises from many different kinds of problems. At first these were found in commerce, land measurement, architecture and later astronomy; today, all sciences suggest problems studied by mathematicians, and many problems arise within mathematics itself. For example, the physicist Richard Feynman invented the path integral formulation of quantum mechanics using a combination of mathematical reasoning and physical insight, and today's string theory, a still-developing scientific theory which attempts to unify the four fundamental forces of nature, continues to inspire new mathematics.[39]

Some mathematics is relevant only in the area that inspired it, and is applied to solve further problems in that area. But often mathematics inspired by one area proves useful in many areas, and joins the general stock of mathematical concepts. A distinction is often made between pure mathematics and applied mathematics. However, pure mathematics topics often turn out to have applications, e.g. number theory in cryptography. This remarkable fact, that even the "purest" mathematics often turns out to have practical applications, is what Eugene Wigner has called "the unreasonable effectiveness of mathematics".[40] As in most areas of study, the explosion of knowledge in the scientific age has led to specialization: there are now hundreds of specialized areas in mathematics, and the latest Mathematics Subject Classification runs to 46 pages.[41] Several areas of applied mathematics have merged with related traditions outside of mathematics and become disciplines in their own right, including statistics, operations research, and computer science.

For those who are mathematically inclined, there is often a definite aesthetic aspect to much of mathematics. Many mathematicians talk about the elegance of mathematics, its intrinsic aesthetics and inner beauty. Simplicity and generality are valued. There is beauty in a simple and elegant proof, such as Euclid's proof that there are infinitely many prime numbers, and in an elegant numerical method that speeds calculation, such as the fast Fourier transform. G.H. Hardy in A Mathematician's Apology expressed the belief that these aesthetic considerations are, in themselves, sufficient to justify the study of pure mathematics. He identified criteria such as significance, unexpectedness, inevitability, and economy as factors that contribute to a mathematical aesthetic.[42] Mathematicians often strive to find proofs that are particularly elegant, proofs from "The Book" of God according to Paul Erdős.[43][44] The popularity of recreational mathematics is another sign of the pleasure many find in solving mathematical questions.
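As an instance of the elegance mentioned above, Euclid's argument that there are infinitely many primes can be given in a few lines (a standard rendering of the classical proof, included here for illustration). Suppose there were only finitely many primes $p_1, p_2, \ldots, p_n$, and let $N = p_1 p_2 \cdots p_n + 1$. Dividing $N$ by any $p_i$ leaves remainder $1$, so no $p_i$ divides $N$. But every integer greater than $1$ has some prime divisor, so $N$ has a prime divisor outside the list, contradicting the assumption that the list was complete. Hence there are infinitely many primes.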

Notation, language, and rigor

Main article: Mathematical notation

Leonhard Euler, who created and popularized much of the mathematical notation used today

Most of the mathematical notation in use today was not invented until the 16th century.[45] Before that, mathematics was written out in words, a painstaking process that limited mathematical discovery.[46] Euler (1707–1783) was responsible for many of the notations in use today. Modern notation makes mathematics much easier for the professional, but beginners often find it daunting. It is extremely compressed: a few symbols contain a great deal of information. Like musical notation, modern mathematical notation has a strict syntax (which to a limited extent varies from author to author and from discipline to discipline) and encodes information that would be difficult to write in any other way.
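As an example of this compression (a standard definition, included here for illustration), the statement "$f$ is continuous at $c$" unpacks into

$\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x : |x - c| < \delta \Rightarrow |f(x) - f(c)| < \varepsilon$

One line of symbols replaces several sentences of careful prose about keeping the output of $f$ arbitrarily close to $f(c)$ by keeping the input close enough to $c$.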

Mathematical language can be difficult to understand for beginners. Words such as or and only have more precise meanings than in everyday speech. Moreover, words such as open and field have been given specialized mathematical meanings. Technical terms such as homeomorphism and integrable have precise meanings in mathematics. Additionally, shorthand phrases such as iff for "if and only if" belong to mathematical jargon. There is a reason for special notation and technical vocabulary: mathematics requires more precision than everyday speech. Mathematicians refer to this precision of language and logic as "rigor".

Mathematical proof is fundamentally a matter of rigor. Mathematicians want their theorems to follow from axioms by means of systematic reasoning. This is to avoid mistaken "theorems", based on fallible intuitions, of which many instances have occurred in the history of the subject.[47] The level of rigor expected in mathematics has varied over time: the Greeks expected detailed arguments, but at the time of Isaac Newton the methods employed were less rigorous. Problems inherent in the definitions used by Newton would lead to a resurgence of careful analysis and formal proof in the 19th century.

Today, mathematicians continue to argue among themselves about computer-assisted proofs. Since large computations are hard to verify, such proofs may not be sufficiently rigorous.[48]

Axioms in traditional thought were "self-evident truths", but that conception is problematic.[49] At a formal level, an axiom is just a string of symbols, which has an intrinsic meaning only in the context of all derivable formulas of an axiomatic system. It was the goal of Hilbert's program to put all of mathematics on a firm axiomatic basis, but according to Gödel's incompleteness theorem every (sufficiently powerful) axiomatic system has undecidable formulas; and so a final axiomatization of mathematics is impossible. Nonetheless mathematics is often imagined to be (as far as its formal content) nothing but set theory in some axiomatization, in the sense that every mathematical statement or proof could be cast into formulas within set theory.[50]

Fields of mathematics

See also: Areas of mathematics and Glossary of areas of mathematics

An abacus, a simple calculating tool used since ancient times

Mathematics can, broadly speaking, be subdivided into the study of quantity, structure, space, and change (i.e. arithmetic, algebra, geometry, and analysis). In addition to these main concerns, there are also subdivisions dedicated to exploring links from the heart of mathematics to other fields: to logic, to set theory (foundations), to the empirical mathematics of the various sciences (applied mathematics), and more recently to the rigorous study of uncertainty. While some areas might seem unrelated, the Langlands program has found connections between areas previously thought unconnected, such as Galois groups, Riemann surfaces and number theory.

Foundations and philosophy

In order to clarify the foundations of mathematics, the fields of mathematical logic and set theory were developed. The resulting search for a rigorous foundation, carried out roughly between 1900 and 1930, was stimulated by a number of controversies of the time, including the controversy over Cantor's set theory and the Brouwer–Hilbert controversy.

Mathematical logic is concerned with setting mathematics within a rigorous axiomatic framework, and studying the implications of such a framework. As such, it is home to Gödel's incompleteness theorems which (informally) imply that any effective formal system that contains basic arithmetic, if sound (meaning that all theorems that can be proven are true), is necessarily incomplete (meaning that there are true theorems which cannot be proved in that system). Whatever finite collection of number-theoretical axioms is taken as a foundation, Gödel showed how to construct a formal statement that is a true number-theoretical fact, but which does not follow from those axioms. Therefore, no formal system is a complete axiomatization of full number theory. Modern logic is divided into recursion theory, model theory, and proof theory, and is closely linked to theoretical computer science,[citation needed] as well as to category theory.

Theoretical computer science includes computability theory, computational complexity theory, and information theory. Computability theory examines the limitations of various theoretical models of the computer, including the most well-known model – the Turing machine. Complexity theory is the study of tractability by computer; some problems, although theoretically solvable by computer, are so expensive in terms of time or space that solving them is likely to remain practically unfeasible, even with the rapid advancement of computer hardware. A famous problem is the "P = NP?" problem, one of the Millennium Prize Problems.[52] Finally, information theory is concerned with the amount of data that can be stored on a given medium, and hence deals with concepts such as compression and entropy.
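The entropy mentioned here has a compact standard definition (textbook notation, included for illustration): for a memoryless source emitting symbol $i$ with probability $p_i$, the Shannon entropy is

$H = -\sum_i p_i \log_2 p_i$

measured in bits per symbol. Shannon's source coding theorem says that a lossless code for such a source needs at least $H$ bits per symbol on average, which is what makes entropy the yardstick for compression.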

[Figure gallery: Mathematical logic · Set theory · Category theory · Theory of computation]

Pure mathematics

Quantity

The study of quantity starts with numbers, first the familiar natural numbers and integers ("whole numbers") and arithmetical operations on them, which are characterized in arithmetic. The deeper properties of integers are studied in number theory, from which come such popular results as Fermat's Last Theorem. The twin prime conjecture and Goldbach's conjecture are two unsolved problems in number theory.

As the number system is further developed, the integers are recognized as a subset of the rational numbers ("fractions"). These, in turn, are contained within the real numbers, which are used to represent continuous quantities. Real numbers are generalized to complex numbers. These are the first steps of a hierarchy of numbers that goes on to include quaternions and octonions. Consideration of the natural numbers also leads to the transfinite numbers, which formalize the concept of "infinity". Another area of study is size, which leads to the cardinal numbers and then to another conception of infinity: the aleph numbers, which allow meaningful comparison of the size of infinitely large sets.

Natural numbers: $1, 2, 3, \ldots$
Integers: $\ldots, -2, -1, 0, 1, 2, \ldots$
Rational numbers: $-2, \frac{2}{3}, 1.21$
Real numbers: $-e, \sqrt{2}, 3, \pi$
Complex numbers: $2, i, -2+3i, 2e^{i\frac{4\pi}{3}}$

Structure

Many mathematical objects, such as sets of numbers and functions, exhibit internal structure as a consequence of operations or relations that are defined on the set. Mathematics then studies properties of those sets that can be expressed in terms of that structure; for instance number theory studies properties of the set of integers that can be expressed in terms of arithmetic operations. Moreover, it frequently happens that different such structured sets (or structures) exhibit similar properties, which makes it possible, by a further step of abstraction, to state axioms for a class of structures, and then study at once the whole class of structures satisfying these axioms. Thus one can study groups, rings, fields and other abstract systems; together such studies (for structures defined by algebraic operations) constitute the domain of abstract algebra.
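As a concrete instance of such an axiomatically defined structure (the standard definition, stated here for illustration), a group is a set $G$ with an operation $\cdot$ satisfying: (i) associativity, $(a \cdot b) \cdot c = a \cdot (b \cdot c)$ for all $a, b, c \in G$; (ii) an identity, some $e \in G$ with $e \cdot a = a \cdot e = a$ for all $a \in G$; and (iii) inverses, for each $a \in G$ some $a^{-1} \in G$ with $a \cdot a^{-1} = a^{-1} \cdot a = e$. The integers under addition satisfy all three (with $e = 0$ and $a^{-1} = -a$), so any theorem proved from these axioms alone applies to them and to every other group at once.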

By its great generality, abstract algebra can often be applied to seemingly unrelated problems; for instance a number of ancient problems concerning compass and straightedge constructions were finally solved using Galois theory, which involves field theory and group theory. Another example of an algebraic theory is linear algebra, which is the general study of vector spaces, whose elements called vectors have both quantity and direction, and can be used to model (relations between) points in space. This is one example of the phenomenon that the originally unrelated areas of geometry and algebra have very strong interactions in modern mathematics. Combinatorics studies ways of enumerating the number of objects that fit a given structure.

The six permutations of $\{1, 2, 3\}$:

$\begin{matrix} (1,2,3) & (1,3,2) \\ (2,1,3) & (2,3,1) \\ (3,1,2) & (3,2,1) \end{matrix}$

[Figure gallery: Combinatorics · Number theory · Group theory · Graph theory · Order theory · Algebra]

Space

The study of space originates with geometry – in particular, Euclidean geometry. Trigonometry is the branch of mathematics that deals with relationships between the sides and the angles of triangles and with the trigonometric functions. The modern study of space generalizes these ideas to include higher-dimensional geometry, non-Euclidean geometries (which play a central role in general relativity) and topology.

[Figure gallery: Geometry · Trigonometry · Differential geometry · Topology · Fractal geometry · Measure theory]

Change

Understanding and describing change is a common theme in the natural sciences, and calculus was developed as a powerful tool to investigate it. Functions arise here, as a central concept describing a changing quantity. The rigorous study of real numbers and functions of a real variable is known as real analysis, with complex analysis the equivalent field for the complex numbers. Functional analysis focuses attention on (typically infinite-dimensional) spaces of functions. One of many applications of functional analysis is quantum mechanics. Many problems lead naturally to relationships between a quantity and its rate of change, and these are studied as differential equations. Many phenomena in nature can be described by dynamical systems; chaos theory makes precise the ways in which many of these systems exhibit unpredictable yet still deterministic behavior.
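A minimal example of such a relationship between a quantity and its rate of change (a textbook case, not from the source) is exponential growth. The differential equation

$\frac{dy}{dt} = k y, \qquad y(0) = y_0$

states that a quantity changes at a rate proportional to its current size, and its solution $y(t) = y_0 e^{kt}$ describes growth (for $k > 0$) or decay (for $k < 0$), as with populations or radioactive material.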

[Figure gallery: Calculus · Vector calculus · Differential equations · Dynamical systems · Chaos theory · Complex analysis]

Applied mathematics

Applied mathematics concerns itself with mathematical methods that are typically used in science, engineering, business, and industry. Thus, "applied mathematics" is a mathematical science with specialized knowledge. The term applied mathematics also describes the professional specialty in which mathematicians work on practical problems; as a profession focused on practical problems, applied mathematics focuses on the "formulation, study, and use of mathematical models" in science, engineering, and other areas of mathematical practice.

In the past, practical applications have motivated the development of mathematical theories, which then became the subject of study in pure mathematics, where mathematics is developed primarily for its own sake. Thus, the activity of applied mathematics is vitally connected with research in pure mathematics.

Statistics and other decision sciences

Applied mathematics has significant overlap with the discipline of statistics, whose theory is formulated mathematically, especially with probability theory. Statisticians (working as part of a research project) "create data that makes sense" with random sampling and with randomized experiments;[53] the design of a statistical sample or experiment specifies the analysis of the data (before the data becomes available). When reconsidering data from experiments and samples or when analyzing data from observational studies, statisticians "make sense of the data" using the art of modelling and the theory of inference – with model selection and estimation; the estimated models and consequential predictions should be tested on new data.[54]

Statistical theory studies decision problems such as minimizing the risk (expected loss) of a statistical action, such as using a procedure in, for example, parameter estimation, hypothesis testing, and selecting the best. In these traditional areas of mathematical statistics, a statistical-decision problem is formulated by minimizing an objective function, like expected loss or cost, under specific constraints: for example, designing a survey often involves minimizing the cost of estimating a population mean with a given level of confidence.[55] Because of its use of optimization, the mathematical theory of statistics shares concerns with other decision sciences, such as operations research, control theory, and mathematical economics.
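In symbols (standard decision-theoretic notation, included for illustration): given data $X$ whose distribution depends on an unknown parameter $\theta$, a loss function $L$, and a decision rule $\delta$, the risk is

$R(\theta, \delta) = \mathbb{E}_\theta\left[ L(\theta, \delta(X)) \right]$

and the statistical-decision problem is to choose $\delta$ minimizing this expected loss, possibly subject to constraints such as a bound on sampling cost.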
