Thursday, 24 November 2011

Foundations of Fuzzy Logic

Fuzzy Sets

Fuzzy logic starts with the concept of a fuzzy set. A fuzzy set is a set without a crisp, clearly defined boundary. It can contain elements with only a partial degree of membership.

To understand what a fuzzy set is, first consider the definition of a classical set. A classical set is a container that wholly includes or wholly excludes any given element. For example, the set of days of the week unquestionably includes Monday, Thursday, and Saturday. It just as unquestionably excludes butter, liberty, and dorsal fins, and so on.

This type of set is called a classical set because it has been around for a long time. It was Aristotle who first formulated the Law of the Excluded Middle, which says X must either be in set A or in set not-A. Another version of this law is:

Of any subject, one thing must be either asserted or denied.

To restate this law with annotations: "Of any subject (say Monday), one thing (a day of the week) must be either asserted or denied (I assert that Monday is a day of the week)." This law demands that opposites, the two categories A and not-A, should between them contain the entire universe. Everything falls into either one group or the other. There is no thing that is both a day of the week and not a day of the week.

Now, consider the set of days comprising a weekend. The following diagram attempts to classify the weekend days.

Most would agree that Saturday and Sunday belong, but what about Friday? It feels like a part of the weekend, but somehow it seems like it should be technically excluded. Thus, in the preceding diagram, Friday tries its best to "straddle on the fence." Classical or normal sets would not tolerate this kind of classification. Either something is in or it is out. Human experience suggests something different, however: straddling the fence is part of life.

Of course individual perceptions and cultural background must be taken into account when you define what constitutes the weekend. Even the dictionary is imprecise, defining the weekend as the period from Friday night or Saturday to Monday morning. You are entering the realm where sharp-edged, yes-no logic stops being helpful. Fuzzy reasoning becomes valuable exactly when you work with how people really perceive the concept weekend as opposed to a simple-minded classification useful for accounting purposes only. More than anything else, the following statement lays the foundations for fuzzy logic.

In fuzzy logic, the truth of any statement becomes a matter of degree.

Any statement can be fuzzy. The major advantage that fuzzy reasoning offers is the ability to reply to a yes-no question with a not-quite-yes-or-no answer. Humans do this kind of thing all the time (think how rarely you get a straight answer to a seemingly simple question), but it is a rather new trick for computers.

How does it work? Reasoning in fuzzy logic is just a matter of generalizing the familiar yes-no (Boolean) logic. If you give true the numerical value of 1 and false the numerical value of 0, fuzzy logic also permits in-between values like 0.2 and 0.7453. For instance:

Q: Is Saturday a weekend day?
A: 1 (yes, or true)
Q: Is Tuesday a weekend day?
A: 0 (no, or false)
Q: Is Friday a weekend day?
A: 0.8 (for the most part yes, but not completely)
Q: Is Sunday a weekend day?
A: 0.95 (yes, but not quite as much as Saturday).

The following plot on the left shows the truth values for weekend-ness if you are forced to respond with an absolute yes or no response. On the right is a plot that shows the truth values for weekend-ness if you are allowed to respond with fuzzy in-between values.

Technically, the representation on the right is from the domain of multivalued logic (or multivalent logic). If you ask the question "Is X a member of set A?" the answer might be yes, no, or any one of a thousand intermediate values in between. Thus, X might have partial membership in A. Multivalued logic stands in direct contrast to the more familiar concept of two-valued (or bivalent yes-no) logic.

To return to the example, now consider a continuous scale time plot of weekend-ness shown in the following plots.

By making the plot continuous, you are defining the degree to which any given instant belongs in the weekend rather than an entire day. In the plot on the left, notice that at midnight on Friday, just as the second hand sweeps past 12, the weekend-ness truth value jumps discontinuously from 0 to 1. This is one way to define the weekend, and while it may be useful to an accountant, it may not really connect with your own real-world experience of weekend-ness.

The plot on the right shows a smoothly varying curve that accounts for the fact that all of Friday, and, to a small degree, parts of Thursday, partake of the quality of weekend-ness and thus deserve partial membership in the fuzzy set of weekend moments. The curve that defines the weekend-ness of any instant in time is a function that maps the input space (time of the week) to the output space (weekend-ness). Specifically it is known as a membership function. See Membership Functions for a more detailed discussion.
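
As a rough illustration of such a membership function, the following sketch plots a crisp weekend definition against a smooth one over a single week, using the toolbox function trapmf. The time axis convention and the breakpoints are illustrative assumptions, not values taken from the original figure.

t = 0:0.01:7;                        % time of the week in days, 0 = Monday midnight
crisp  = double(t >= 5 & t < 7);     % accountant's weekend: all of Saturday and Sunday
smooth = trapmf(t, [4.3 5 6.7 7]);   % fuzzy weekend-ness: Friday partially included
plot(t, crisp, '--', t, smooth)
xlabel('time of the week (days)'), ylabel('weekend-ness')
legend('crisp definition', 'fuzzy definition')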

As another example of fuzzy sets, consider the question of seasons. What season is it right now? In the northern hemisphere, summer officially begins at the exact moment in the earth's orbit when the North Pole is pointed most directly toward the sun. It occurs exactly once a year, in late June. Using the astronomical definitions for the season, you get sharp boundaries as shown on the left in the figure that follows. But what you experience as the seasons vary more or less continuously as shown on the right in the following figure (in temperate northern hemisphere climates).

Membership Functions

A membership function (MF) is a curve that defines how each point in the input space is mapped to a membership value (or degree of membership) between 0 and 1. The input space is sometimes referred to as the universe of discourse, a fancy name for a simple concept.

One of the most commonly used examples of a fuzzy set is the set of tall people. In this case, the universe of discourse is all potential heights, say from 3 feet to 9 feet, and the word tall would correspond to a curve that defines the degree to which any person is tall. If the set of tall people is given the well-defined (crisp) boundary of a classical set, you might say all people taller than 6 feet are officially considered tall. However, such a distinction is clearly absurd. It may make sense to consider the set of all real numbers greater than 6 because numbers belong on an abstract plane, but when we want to talk about real people, it is unreasonable to call one person short and another one tall when they differ in height by the width of a hair.

If the kind of distinction shown previously is unworkable, then what is the right way to define the set of tall people? Much as with the plot of weekend days, the following figure shows a smoothly varying curve that passes from not-tall to tall. The output axis represents the membership value, a number between 0 and 1. The curve is known as a membership function and is often given the designation of µ. This curve defines the transition from not tall to tall. Both people are tall to some degree, but one is significantly less tall than the other.

Subjective interpretations and appropriate units are built right into fuzzy sets. If you say "She's tall," the membership function tall should already take into account whether you are referring to a six-year-old or a grown woman. Similarly, the units are included in the curve. Certainly it makes no sense to say "Is she tall in inches or in meters?"

Membership Functions in Fuzzy Logic Toolbox Software

The only condition a membership function must really satisfy is that it must vary between 0 and 1. The function itself can be an arbitrary curve whose shape we can define as a function that suits us from the point of view of simplicity, convenience, speed, and efficiency.

A classical set might be expressed as

A = {x | x > 6}

A fuzzy set is an extension of a classical set. If X is the universe of discourse and its elements are denoted by x, then a fuzzy set A in X is defined as a set of ordered pairs.

A = {(x, µA(x)) | x ∈ X}

µA(x) is called the membership function (or MF) of x in A. The membership function maps each element of X to a membership value between 0 and 1.
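
To make the contrast between the crisp set A = {x | x > 6} and a fuzzy set concrete, the following sketch plots both over the universe of discourse of heights. The choice of smf and the breakpoints 5.5 and 7 are illustrative assumptions, not part of the original text.

height    = 3:0.05:9;                % universe of discourse: heights in feet
crispTall = double(height > 6);      % classical set: in or out
fuzzyTall = smf(height, [5.5 7]);    % fuzzy set "tall": smooth transition
plot(height, crispTall, '--', height, fuzzyTall)
xlabel('height (feet)'), ylabel('membership value')
legend('crisp  x > 6', 'fuzzy  tall')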

The toolbox includes 11 built-in membership function types. These 11 functions are, in turn, built from several basic functions:

  • piece-wise linear functions

  • the Gaussian distribution function

  • the sigmoid curve

  • quadratic and cubic polynomial curves

For detailed information on any of the membership functions mentioned next, turn to Functions — Alphabetical List. By convention, all membership functions have the letters mf at the end of their names.

The simplest membership functions are formed using straight lines. Of these, the simplest is the triangular membership function, and it has the function name trimf. This function is nothing more than a collection of three points forming a triangle. The trapezoidal membership function, trapmf, has a flat top and really is just a truncated triangle curve. These straight line membership functions have the advantage of simplicity.
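
For example, each straight-line membership function takes the breakpoints of its curve as a parameter vector; the values below are illustrative.

x = 0:0.1:10;
yTri  = trimf(x, [3 6 8]);     % triangle: feet at 3 and 8, peak at 6
yTrap = trapmf(x, [1 5 7 8]);  % trapezoid: rises 1 to 5, flat 5 to 7, falls 7 to 8
plot(x, yTri, x, yTrap)
legend('trimf, P = [3 6 8]', 'trapmf, P = [1 5 7 8]')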

Two membership functions are built on the Gaussian distribution curve: a simple Gaussian curve and a two-sided composite of two different Gaussian curves. The two functions are gaussmf and gauss2mf.

The generalized bell membership function is specified by three parameters and has the function name gbellmf. The bell membership function has one more parameter than the Gaussian membership function, so it can approach a non-fuzzy set if the free parameter is tuned. Because of their smoothness and concise notation, Gaussian and bell membership functions are popular methods for specifying fuzzy sets. Both of these curves have the advantage of being smooth and nonzero at all points.
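
A short comparison of the three smooth curves follows; the parameter vectors (standard deviation and center for gaussmf, left and right halves for gauss2mf, width, shape, and center for gbellmf) are illustrative choices.

x = 0:0.1:10;
yGauss  = gaussmf(x, [2 5]);       % sigma = 2, center = 5
yGauss2 = gauss2mf(x, [1 3 3 4]);  % left half: sigma 1, center 3; right half: sigma 3, center 4
yBell   = gbellmf(x, [2 4 6]);     % width 2, shape 4, center 6
plot(x, yGauss, x, yGauss2, x, yBell)
legend('gaussmf', 'gauss2mf', 'gbellmf')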

Although the Gaussian membership functions and bell membership functions achieve smoothness, they are unable to specify asymmetric membership functions, which are important in certain applications. Next, you define the sigmoidal membership function, which is either open left or right. Asymmetric and closed (i.e. not open to the left or right) membership functions can be synthesized using two sigmoidal functions, so in addition to the basic sigmf, you also have the difference between two sigmoidal functions, dsigmf, and the product of two sigmoidal functions psigmf.
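
The sketch below shows the open sigmf alongside the two closed, asymmetric combinations; the parameter vectors (slope and crossover point for each underlying sigmoid) are illustrative.

x = 0:0.1:10;
ySig  = sigmf(x, [2 4]);        % open to the right: slope 2, crossover at 4
yDSig = dsigmf(x, [5 2 5 7]);   % difference of two sigmoids: closed between 2 and 7
yPSig = psigmf(x, [2 3 -5 8]);  % product of a rising and a falling sigmoid
plot(x, ySig, x, yDSig, x, yPSig)
legend('sigmf', 'dsigmf', 'psigmf')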

Polynomial based curves account for several of the membership functions in the toolbox. Three related membership functions are the Z, S, and Pi curves, all named because of their shape. The function zmf is the asymmetrical polynomial curve open to the left, smf is the mirror-image function that opens to the right, and pimf is zero on both extremes with a rise in the middle.
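
The same pattern holds for the polynomial curves; the breakpoints below are illustrative.

x = 0:0.1:10;
yZ  = zmf(x, [3 7]);        % Z curve: open to the left
yS  = smf(x, [1 8]);        % S curve: open to the right
yPi = pimf(x, [1 4 5 10]);  % Pi curve: zero at both extremes, rise in the middle
plot(x, yZ, x, yS, x, yPi)
legend('zmf', 'smf', 'pimf')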

There is a wide selection of membership functions to choose from, and you can also create your own with the toolbox. However, if the expanded list seems too complicated, just remember that you could probably get along very well with only one or two types, for example the triangle and trapezoid functions. The selection is wide for those who want to explore the possibilities, but exotic membership functions are not necessary for good fuzzy inference systems. Finally, remember that more details are available on all these functions in the reference section.

Summary of Membership Functions

  • Fuzzy sets describe vague concepts (e.g., fast runner, hot weather, weekend days).

  • A fuzzy set admits the possibility of partial membership in it. (e.g., Friday is sort of a weekend day, the weather is rather hot).

  • The degree an object belongs to a fuzzy set is denoted by a membership value between 0 and 1. (e.g., Friday is a weekend day to the degree 0.8).

  • A membership function associated with a given fuzzy set maps an input value to its appropriate membership value.

Logical Operations

Now that you understand fuzzy sets and membership functions, you need to see how they connect with logical operations.

The most important thing to realize about fuzzy logical reasoning is the fact that it is a superset of standard Boolean logic. In other words, if you keep the fuzzy values at their extremes of 1 (completely true), and 0 (completely false), standard logical operations will hold. As an example, consider the following standard truth tables.

Now, because in fuzzy logic the truth of any statement is a matter of degree, can these truth tables be altered? The input values can be real numbers between 0 and 1. What function preserves the results of the AND truth table (for example) and also extends to all real numbers between 0 and 1?

One answer is the min operation. That is, resolve the statement A AND B, where A and B are limited to the range (0,1), by using the function min(A,B). Using the same reasoning, you can replace the OR operation with the max function, so that A OR B becomes equivalent to max(A,B). Finally, the operation NOT A becomes equivalent to the operation 1 - A. Notice how the previous truth table is completely unchanged by this substitution.
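
The following sketch spot-checks this claim at the MATLAB command line; the in-between values 0.7 and 0.4 are arbitrary examples.

A = [0 0 1 1];  B = [0 1 0 1];  % the four rows of the two-valued truth tables
min(A, B)                       % 0 0 0 1  -> matches the AND table
max(A, B)                       % 0 1 1 1  -> matches the OR table
1 - A                           % 1 1 0 0  -> matches the NOT table
min(0.7, 0.4)                   % 0.4000   -> 0.7 AND 0.4
max(0.7, 0.4)                   % 0.7000   -> 0.7 OR 0.4
1 - 0.7                         % 0.3000   -> NOT 0.7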

Moreover, because there is a function behind the truth table rather than just the truth table itself, you can now consider values other than 1 and 0.

The next figure uses a graph to show the same information. In this figure, the truth table is converted to a plot of two fuzzy sets applied together to create one fuzzy set. The upper part of the figure displays plots corresponding to the preceding two-valued truth tables, while the lower part of the figure displays how the operations work over a continuously varying range of truth values A and B according to the fuzzy operations you have defined.

Given these three functions, you can resolve any construction using fuzzy sets and the fuzzy logical operations AND, OR, and NOT.

Additional Fuzzy Operators

So far, you have defined only one particular correspondence between two-valued and multivalued logical operations for AND, OR, and NOT. This correspondence is by no means unique.

In more general terms, you are defining what are known as the fuzzy intersection or conjunction (AND), fuzzy union or disjunction (OR), and fuzzy complement (NOT). The classical operators for these functions are: AND = min, OR = max, and NOT = additive complement. Typically, most fuzzy logic applications make use of these operations and leave it at that. In general, however, these functions are arbitrary to a surprising degree. Fuzzy Logic Toolbox software uses the classical operator for the fuzzy complement as shown in the previous figure, but also enables you to customize the AND and OR operators.
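
For comparison, the two built-in alternatives the toolbox offers for AND and OR (the algebraic product and the probabilistic OR) can be evaluated directly; the values 0.7 and 0.4 are arbitrary.

a = 0.7;  b = 0.4;
min(a, b)        % 0.4000  classical fuzzy AND
max(a, b)        % 0.7000  classical fuzzy OR
a * b            % 0.2800  AND as the algebraic product (prod)
a + b - a*b      % 0.8200  OR as the probabilistic OR (probor)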

The intersection of two fuzzy sets A and B is specified in general by a binary mapping T, which aggregates two membership functions as follows:

µA∩B(x) = T(µA(x), µB(x))

For example, the binary operator T may represent the multiplication of µA(x) and µB(x). These fuzzy intersection operators, which are usually referred to as T-norm (Triangular norm) operators, meet the following basic requirements:

A T-norm operator is a binary mapping T(.,.) satisfying
boundary: T(0, 0) = 0, T(a, 1) = T(1, a) = a
monotonicity: T(a, b) <= T(c, d) if a <= c and b <= d
commutativity: T(a, b) = T(b, a)
associativity: T(a, T(b, c)) = T(T(a, b), c)

The first requirement imposes the correct generalization to crisp sets. The second requirement implies that a decrease in the membership values in A or B cannot produce an increase in the membership value in A intersection B. The third requirement indicates that the operator is indifferent to the order of the fuzzy sets to be combined. Finally, the fourth requirement allows us to take the intersection of any number of sets in any order of pair-wise groupings.
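
As a quick numerical check that the algebraic product also satisfies these requirements, the following sketch evaluates each property at a few sample values, chosen as powers of two so that the comparisons are exact in floating point.

T = @(a, b) a .* b;                  % algebraic product as a candidate T-norm
a = 0.25;  b = 0.75;  c = 0.5;
[T(a, 1), a]                         % boundary:      T(a,1) equals a
[T(a, b), T(b, a)]                   % commutativity: the two values are equal
[T(a, T(b, c)), T(T(a, b), c)]       % associativity: the two values are equal
T(0.25, b) <= T(0.5, b)              % monotonicity:  returns 1 (true)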

Like fuzzy intersection, the fuzzy union operator is specified in general by a binary mapping S:

µA∪B(x) = S(µA(x), µB(x))

For example, the binary operator S can represent the addition of µA(x) and µB(x). These fuzzy union operators, which are often referred to as T-conorm (or S-norm) operators, must satisfy the following basic requirements:

A T-conorm (or S-norm) operator is a binary mapping S(.,.) satisfying
boundary: S(1, 1) = 1, S(a, 0) = S(0, a) = a
monotonicity: S(a, b) <= S(c, d) if a <= c and b <= d
commutativity: S(a, b) = S(b, a)
associativity: S(a, S(b, c)) = S(S(a, b), c)

Several parameterized T-norms and dual T-conorms have been proposed in the past, such as those of Yager[19], Dubois and Prade [3], Schweizer and Sklar [14], and Sugeno [15], found in the Bibliography. Each of these provides a way to vary the gain on the function so that it can be very restrictive or very permissive.

If-Then Rules

Fuzzy sets and fuzzy operators are the subjects and verbs of fuzzy logic. If-then rule statements are used to formulate the conditional statements that comprise fuzzy logic.

A single fuzzy if-then rule assumes the form

if x is A then y is B

where A and B are linguistic values defined by fuzzy sets on the ranges (universes of discourse) X and Y, respectively. The if-part of the rule "x is A" is called the antecedent or premise, while the then-part of the rule "y is B" is called the consequent or conclusion. An example of such a rule might be

If service is good then tip is average

The concept good is represented as a number between 0 and 1, and so the antecedent is an interpretation that returns a single number between 0 and 1. Conversely, average is represented as a fuzzy set, and so the consequent is an assignment that assigns the entire fuzzy set B to the output variable y. In the if-then rule, the word is gets used in two entirely different ways depending on whether it appears in the antecedent or the consequent. In MATLAB terms, this usage is the distinction between a relational test using "==" and a variable assignment using the "=" symbol. A less confusing way of writing the rule would be

If service == good then tip = average

In general, the input to an if-then rule is the current value for the input variable (in this case, service) and the output is an entire fuzzy set (in this case, average). This set will later be defuzzified, assigning one value to the output. The concept of defuzzification is described in the next section.

Interpreting an if-then rule involves distinct parts: first evaluating the antecedent (which involves fuzzifying the input and applying any necessary fuzzy operators) and second applying that result to the consequent (known as implication). In the case of two-valued or binary logic, if-then rules do not present much difficulty. If the premise is true, then the conclusion is true. If you relax the restrictions of two-valued logic and let the antecedent be a fuzzy statement, how does this reflect on the conclusion? The answer is a simple one: if the antecedent is true to some degree of membership, then the consequent is also true to that same degree.

Thus:

in binary logic: p → q (p and q are either both true or both false.)
in fuzzy logic: 0.5 p → 0.5 q (Partial antecedents provide partial implication.)

The antecedent of a rule can have multiple parts.

if sky is gray and wind is strong and barometer is falling, then ...

in which case all parts of the antecedent are calculated simultaneously and resolved to a single number using the logical operators described in the preceding section. The consequent of a rule can also have multiple parts.

if temperature is cold then hot water valve is open and cold water valve is shut

in which case all consequents are affected equally by the result of the antecedent. How is the consequent affected by the antecedent? The consequent specifies a fuzzy set be assigned to the output. The implication function then modifies that fuzzy set to the degree specified by the antecedent. The most common ways to modify the output fuzzy set are truncation using the min function (where the fuzzy set is truncated as shown in the following figure) or scaling using the prod function (where the output fuzzy set is squashed). Both are supported by the toolbox, but you use truncation for the examples in this section.
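
The following sketch shows both implication methods applied to a single consequent fuzzy set; the trimf parameters and the antecedent degree of 0.5 are illustrative.

x  = 0:0.1:10;
mf = trimf(x, [2 5 8]);     % consequent fuzzy set, e.g. "tip is average"
w  = 0.5;                   % degree of support delivered by the antecedent
truncated = min(w, mf);     % implication by truncation (min)
scaled    = w .* mf;        % implication by scaling (prod)
plot(x, mf, x, truncated, x, scaled)
legend('consequent MF', 'min implication', 'prod implication')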

Summary of If-Then Rules

Interpreting if-then rules is a three-part process. This process is explained in detail in the next section:

  1. Fuzzify inputs: Resolve all fuzzy statements in the antecedent to a degree of membership between 0 and 1. If there is only one part to the antecedent, then this is the degree of support for the rule.

  2. Apply fuzzy operator to multiple part antecedents: If there are multiple parts to the antecedent, apply fuzzy logic operators and resolve the antecedent to a single number between 0 and 1. This is the degree of support for the rule.

  3. Apply implication method: Use the degree of support for the entire rule to shape the output fuzzy set. The consequent of a fuzzy rule assigns an entire fuzzy set to the output. This fuzzy set is represented by a membership function that is chosen to indicate the qualities of the consequent. If the antecedent is only partially true, (i.e., is assigned a value less than 1), then the output fuzzy set is truncated according to the implication method.

In general, one rule alone is not effective. Two or more rules that can play off one another are needed. The output of each rule is a fuzzy set. The output fuzzy sets for each rule are then aggregated into a single output fuzzy set. Finally the resulting set is defuzzified, or resolved to a single number. Fuzzy Inference Systems shows how the whole process works from beginning to end for a particular type of fuzzy inference system called a Mamdani type.
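
A minimal sketch of those last two steps follows, aggregating two truncated rule outputs with max and then defuzzifying with the toolbox function defuzz; the membership functions and firing strengths are illustrative.

x    = 0:0.1:10;
out1 = min(0.5, trimf(x, [0 3 6]));    % rule 1 output, truncated to firing strength 0.5
out2 = min(0.8, trimf(x, [4 7 10]));   % rule 2 output, truncated to firing strength 0.8
agg  = max(out1, out2);                % aggregate the rule outputs into one fuzzy set
crispOut = defuzz(x, agg, 'centroid')  % resolve the aggregate set to a single number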

http://www.mathworks.com/help/toolbox/fuzzy/bp78l6_-1.html

Fuzzy Clustering

What is Data Clustering

Clustering of numerical data forms the basis of many classification and system modeling algorithms. The purpose of clustering is to identify natural groupings of data from a large data set to produce a concise representation of a system's behavior.

Fuzzy Logic Toolbox tools allow you to find clusters in input-output training data. You can use the cluster information to generate a Sugeno-type fuzzy inference system that best models the data behavior using a minimum number of rules. The rules partition themselves according to the fuzzy qualities associated with each of the data clusters. Use the command-line function genfis2 to automatically accomplish this type of FIS generation.

Fuzzy C-Means Clustering

Fuzzy c-means (FCM) is a data clustering technique wherein each data point belongs to a cluster to some degree that is specified by a membership grade. This technique was originally introduced by Jim Bezdek in 1981[1] as an improvement on earlier clustering methods. It provides a method that shows how to group data points that populate some multidimensional space into a specific number of different clusters.

Fuzzy Logic Toolbox command line function fcm starts with an initial guess for the cluster centers, which are intended to mark the mean location of each cluster. The initial guess for these cluster centers is most likely incorrect. Additionally, fcm assigns every data point a membership grade for each cluster. By iteratively updating the cluster centers and the membership grades for each data point, fcm iteratively moves the cluster centers to the right location within a data set. This iteration is based on minimizing an objective function that represents the distance from any given data point to a cluster center weighted by that data point's membership grade.
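
That objective function can be written out directly. The following is a minimal sketch of the computation, assuming the standard fuzzy c-means formulation with the default fuzziness exponent m = 2 and Euclidean distance, and using the variable names (center, U, fcmdata) from the example that follows.

% J = sum over clusters i and data points k of U(i,k)^m * ||x_k - c_i||^2
m = 2;                                                     % fuzziness exponent (fcm default)
J = 0;
for i = 1:size(center, 1)
    d2 = sum(bsxfun(@minus, fcmdata, center(i,:)).^2, 2);  % squared distances to center i
    J  = J + sum((U(i,:)'.^m) .* d2);                      % weight by membership^m
end
J   % should be close to the final value reported in objFcn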

The command line function fcm outputs a list of cluster centers and several membership grades for each data point. You can use the information returned by fcm to help you build a fuzzy inference system by creating membership functions to represent the fuzzy qualities of each cluster.

An Example: 2-D Clusters

You can use quasi-random two-dimensional data to illustrate how FCM clustering works. To load the data set and plot it, type the following commands:

load fcmdata.dat
plot(fcmdata(:,1),fcmdata(:,2),'o')

Next, invoke the command-line function fcm to find two clusters in this data set until the objective function is no longer decreasing much at all.

[center,U,objFcn] = fcm(fcmdata,2);

Here, the variable center contains the coordinates of the two cluster centers, U contains the membership grades for each of the data points, and objFcn contains a history of the objective function across the iterations.

This command returns the following result:

Iteration count = 1, obj. fcn = 8.794048
Iteration count = 2, obj. fcn = 6.986628
.....
Iteration count = 12, obj. fcn = 3.797430

The fcm function is an iteration loop built on top of the following routines:

  • initfcm — initializes the problem

  • distfcm — performs Euclidean distance calculation

  • stepfcm — performs one iteration of clustering

To view the progress of the clustering, plot the objective function by typing the following commands:

figure
plot(objFcn)
title('Objective Function Values')
xlabel('Iteration Count')
ylabel('Objective Function Value')

Finally, plot the two cluster centers found by the fcm function using the following code:

maxU = max(U);
index1 = find(U(1, :) == maxU);
index2 = find(U(2, :) == maxU);
figure
line(fcmdata(index1, 1), fcmdata(index1, 2), 'linestyle',...
'none','marker', 'o','color','g');
line(fcmdata(index2,1),fcmdata(index2,2),'linestyle',...
'none','marker', 'x','color','r');
hold on
plot(center(1,1),center(1,2),'ko','markersize',15,'LineWidth',2)
plot(center(2,1),center(2,2),'kx','markersize',15,'LineWidth',2)

    Note Every time you run this example, the fcm function initializes with different initial conditions. This behavior swaps the order in which the cluster centers are computed and plotted.

In the following figure, the large characters indicate cluster centers.

Subtractive Clustering

If you do not have a clear idea how many clusters there should be for a given set of data, subtractive clustering [2] is a fast, one-pass algorithm for estimating the number of clusters and the cluster centers in a set of data. The cluster estimates, which are obtained from the subclust function, can be used to initialize iterative optimization-based clustering methods (fcm) and model identification methods (like anfis). The subclust function finds the clusters by using the subtractive clustering method.

The genfis2 function builds upon the subclust function to provide a fast, one-pass method to take input-output training data and generate a Sugeno-type fuzzy inference system that models the data behavior.

An Example: Suburban Commuting

In this example, you apply the genfis2 function to model the relationship between the number of automobile trips generated from an area and the area's demographics. Demographic and trip data are from 100 traffic analysis zones in New Castle County, Delaware. Five demographic factors are considered: population, number of dwelling units, vehicle ownership, median household income, and total employment. Hence, the model has five input variables and one output variable.

Load and plot the data by typing the following commands:

clear
close all
mytripdata
subplot(2,1,1), plot(datin)
subplot(2,1,2), plot(datout)

The next figure displays the input and the output data.

The function mytripdata creates several variables in the workspace. Of the original 100 data points, use 75 data points as training data (datin and datout) and 25 data points as checking data (as well as for test data to validate the model). The checking data input/output pairs are denoted by chkdatin and chkdatout.

Use the genfis2 function to generate a model from data using clustering. genfis2 requires you to specify a cluster radius. The cluster radius indicates the range of influence of a cluster when you consider the data space as a unit hypercube. Specifying a small cluster radius usually yields many small clusters in the data, and results in many rules. Specifying a large cluster radius usually yields a few large clusters in the data, and results in fewer rules. The cluster radius is specified as the third argument of genfis2. The following syntax calls the genfis2 function using a cluster radius of 0.5.

 fismat=genfis2(datin,datout,0.5);

The genfis2 function is a fast, one-pass method that does not perform any iterative optimization. A FIS structure is returned; the model type for the FIS structure is a first order Sugeno model with three rules.
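
If you want to inspect what genfis2 produced before tuning it, the toolbox functions showrule and plotmf display the generated rules and the input membership functions. A minimal sketch; the resulting listing and figure are not part of the original walk-through.

showrule(fismat)              % list the three generated Sugeno rules
figure
plotmf(fismat, 'input', 1)    % membership functions for the first input variable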

Use the following commands to verify the model. Here, trnRMSE is the root mean square error of the system generated by the training data.

fuzout=evalfis(datin,fismat);
trnRMSE=norm(fuzout-datout)/sqrt(length(fuzout))

These commands return the following result:

trnRMSE =
0.5276

Next, apply the test data to the FIS to validate the model. In this example, the checking data is used for both checking and testing the FIS parameters. Here, chkRMSE is the root mean square error of the system generated by the checking data.

chkfuzout=evalfis(chkdatin,fismat);
chkRMSE=norm(chkfuzout-chkdatout)/sqrt(length(chkfuzout))

These commands return the following result:

chkRMSE =
0.6179

Use the following commands to plot the output of the model chkfuzout against the checking data chkdatout.

figure
plot(chkdatout)
hold on
plot(chkfuzout,'o')
hold off

The model output and checking data are shown as circles and solid blue line, respectively. The plot shows the model does not perform well on the checking data.

At this point, you can use the optimization capability of anfis to improve the model. First, try using a relatively short anfis training (20 epochs) without implementing the checking data option, and then test the resulting FIS model against the testing data. To perform the optimization, type the following command:

fismat2=anfis([datin datout],fismat,[20 0 0.1]);

Here, 20 is the number of epochs, 0 is the training error goal, and 0.1 is the initial step size.

This command returns the following result:

ANFIS info:
Number of nodes: 44
Number of linear parameters: 18
Number of nonlinear parameters: 30
Total number of parameters: 48
Number of training data pairs: 75
Number of checking data pairs: 0
Number of fuzzy rules: 3

Start training ANFIS ...

1 0.527607
.
.
20 0.420275

Designated epoch number reached --> ANFIS training completed at epoch 20.

After the training is done, validate the model by typing the following commands:

fuzout2=evalfis(datin,fismat2);
trnRMSE2=norm(fuzout2-datout)/sqrt(length(fuzout2))
chkfuzout2=evalfis(chkdatin,fismat2);
chkRMSE2=norm(chkfuzout2-chkdatout)/sqrt(length(chkfuzout2))

These commands return the following results:

trnRMSE2 =
0.4203
chkRMSE2 =
0.5894

The model has improved a lot with respect to the training data, but only a little with respect to the checking data. Plot the improved model output obtained using anfis against the testing data by typing the following commands:

figure
plot(chkdatout)
hold on
plot(chkfuzout2,'o')
hold off

The next figure shows the model output.

The model output and checking data are shown as circles and solid blue line, respectively. This plot shows that genfis2 can be used as a stand-alone, fast method for generating a fuzzy model from data, or as a preprocessor to anfis for determining the initial rules. An important advantage of using a clustering method to find rules is that the resultant rules are more tailored to the input data than they are in a FIS generated without clustering. This reduces the problem of an excessive propagation of rules when the input data has a high dimension.

Overfitting

Overfitting can be detected when the checking error starts to increase while the training error continues to decrease.

To check the model for overfitting, use anfis with the checking data option to train the model for 200 epochs. Here, fismat3 is the FIS structure when the training error reaches a minimum. fismat4 is the snapshot FIS structure taken when the checking data error reaches a minimum.

[fismat3,trnErr,stepSize,fismat4,chkErr]= ...
anfis([datin datout],fismat,[200 0 0.1],[], ...
[chkdatin chkdatout]);

This command returns a list of output arguments. The output arguments show a history of the step sizes, the RMSE using the training data, and the RMSE using the checking data for each training epoch.

  1    0.527607    0.617875
  2    0.513727    0.615487
  .
  .
200    0.326576    0.601531

Designated epoch number reached --> ANFIS training completed at
epoch 200.

After the training completes, validate the model by typing the following commands:

fuzout4=evalfis(datin,fismat4);
trnRMSE4=norm(fuzout4-datout)/sqrt(length(fuzout4))
chkfuzout4=evalfis(chkdatin,fismat4);
chkRMSE4=norm(chkfuzout4-chkdatout)/sqrt(length(chkfuzout4))

These commands return the following results:

trnRMSE4 =
0.3393
chkRMSE4 =
0.5833

The error with the training data is the lowest thus far, and the error with the checking data is also slightly lower than before. This result suggests perhaps there is an overfit of the system to the training data. Overfitting occurs when you fit the fuzzy system to the training data so well that it no longer does a very good job of fitting the checking data. The result is a loss of generality.

To view the improved model output, plot the model output against the checking data by typing the following commands:

figure
plot(chkdatout)
hold on
plot(chkfuzout4,'o')
hold off

The model output and checking data are shown as circles and solid blue line, respectively.

Next, plot the training error trnErr by typing the following commands:

figure
plot(trnErr)
title('Training Error')
xlabel('Number of Epochs')
ylabel('Training Error')

This plot shows that the training error settles at about the 60th epoch point.

Plot the checking error chkErr by typing the following commands:

figure
plot(chkErr)
title('Checking Error')
xlabel('Number of Epochs')
ylabel('Checking Error')

The plot shows that the smallest value of the checking data error occurs at the 52nd epoch, after which it increases slightly even as anfis continues to minimize the error against the training data all the way to the 200th epoch. Depending on the specified error tolerance, the plot also indicates the model's ability to generalize the test data.
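
If you prefer to read these epochs from the error histories rather than from the plots, min with two output arguments returns both the smallest error and the epoch at which it occurs. A minimal sketch; with the run shown above, the checking-error minimum falls near epoch 52.

[minChkErr, bestEpoch] = min(chkErr)   % smallest checking error and its epoch
[minTrnErr, trnEpoch]  = min(trnErr)   % training error keeps falling, so this lands near the final epoch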

You can also compare the output of fismat2 and fismat4 against the checking data chkdatout by typing the following commands:

figure
plot(chkdatout)
hold on
plot(chkfuzout4,'ob')
plot(chkfuzout2,'+r')

Data Clustering Using the Clustering GUI Tool

The Clustering GUI Tool implements the fuzzy data clustering functions fcm and subclust and lets you perform clustering on the data. For more information on the clustering functions, see Fuzzy C-Means Clustering and Subtractive Clustering.

To start the GUI, type the following command at the MATLAB command prompt:

findcluster

The Clustering GUI Tool is shown in the next figure.

This GUI lets you perform the following tasks:

  1. Load and plot the data.

  2. Start the clustering.

  3. Save the cluster center.

Access the online help topics by clicking Info or using the Help menu in the Clustering GUI.

Loading and Plotting the Data

To load a data set in the GUI, perform either of the following actions:

  • Click Load Data, and select the file containing the data.

  • Open the GUI with a data set directly by invoking findcluster with the data set as the argument, in the MATLAB Command Window.

    The data set must have the extension .dat. For example, to load the data set clusterdemo.dat, type findcluster('clusterdemo.dat').

The Clustering GUI Tool works on multidimensional data sets, but displays only two of those dimensions on the plot. To select other dimensions in the data set for plotting, you can use the drop-down lists under X-axis and Y-axis.

Starting the Clustering

To start clustering the data:

  1. Choose the clustering function fcm (fuzzy C-Means clustering) or subtractiv (subtractive clustering) from the drop-down menu under Methods.

  2. Set options for the selected method using the Influence Range, Squash, Aspect Ratio, and Reject Ratio fields.

    For more information on these methods and their options, refer to fcm, and subclust respectively.

  3. Begin clustering by clicking Start.

    After clustering is complete, the cluster centers appear in black as shown in the next figure.

Saving the Cluster Center

To save the cluster centers, click Save Center.

http://www.mathworks.com/help/toolbox/fuzzy/fp310.html#FP2434

Thursday, 10 November 2011

Mathematics

Mathematics (from the Greek μαθηματικά - mathēmatiká) is the study of quantity, structure, space, and change. Mathematicians seek out patterns, formulate new conjectures, and establish truths by rigorous deduction from suitably chosen axioms and definitions.

There is debate over whether mathematical objects such as numbers and points exist naturally or are merely human creations. The mathematician Benjamin Peirce called mathematics "the science that draws necessary conclusions". Albert Einstein, on the other hand, stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."

Through the use of abstraction and logical reasoning, mathematics developed from counting, calculation, measurement, and the systematic study of the shapes and motions of physical objects. Practical mathematics has been a human activity for as long as written records have existed. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Mathematics continued to develop, for example in China around 300 BC, in India around AD 100, and in the Arab world around AD 800, until the Renaissance, when new mathematical developments interacted with new scientific discoveries, leading to a rapid increase in the pace of mathematical discovery that continues to the present day.

Today, mathematics is used throughout the world as an essential tool in many fields, including natural science, engineering, medicine, and social sciences such as economics and psychology. Applied mathematics, the branch of mathematics concerned with the application of mathematical knowledge to other fields, inspires and makes use of new mathematical discoveries and sometimes leads to the development of entirely new disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind, although practical applications for what began as pure mathematics are often discovered later.

Etymology
The word "mathematics" comes from the Ancient Greek μάθημα (máthēma), meaning study, learning, or science, whose scope later narrowed so that its technical meaning became "the study of mathematics", even in classical times. Its adjective is μαθηματικός (mathēmatikós), meaning related to study, or studious, which likewise came to mean mathematical. In particular, μαθηματικὴ τέχνη (mathēmatikḗ tékhnē), in Latin ars mathematica, meant the mathematical art.

The plural form is often used in English, as in the French les mathématiques (with the singular derivative la mathématique rarely used), going back to the Latin neuter plural mathematica (Cicero), based on the Greek plural τα μαθηματικά (ta mathēmatiká), used by Aristotle and roughly meaning "all things mathematical". In English, however, the noun mathematics takes a singular verb. Colloquially, mathematics is often shortened to math in North America and maths elsewhere.

History
The evolution of mathematics can be seen as an ever-increasing series of abstractions, or, put another way, an expansion of subject matter. The first abstraction, shared by many animals, was probably that of numbers: the realization that two apples and two oranges (for example) have the same quantity.

Besides knowing how to count physical objects, prehistoric peoples also recognized how to count abstract quantities such as time: days, seasons, years. Elementary arithmetic (addition, subtraction, multiplication, and division) followed naturally.

Further steps required writing or some other system for recording numbers, such as the knotted cords called quipu used by the Inca to store numerical data. Numeral systems have been many and varied; the first known written numerals appear in texts left by the Ancient Egyptians of the Middle Kingdom, such as the Rhind Mathematical Papyrus.
The Maya numeral system

The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns, and the recording of time, and did not expand greatly until about 3000 BC onward, when the Babylonians and Ancient Egyptians began using arithmetic, algebra, and geometry for tax calculations and other financial matters, for building and construction, and for astronomy. The systematic study of mathematics in its own right began in Ancient Greece between about 600 and 300 BC.

Mathematics has since expanded greatly, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries have been made throughout history and continue to be made today. According to Mikhail B. Sevryuk, in the January 2006 issue of the Bulletin of the American Mathematical Society, "The number of papers and books included in the Mathematical Reviews database since 1940 (the first year of operation of MR) now exceeds 1.9 million, and more than 75 thousand items are added to the database each year. The overwhelming majority of works in this ocean contain new mathematical theorems and their proofs."

Inspiration, pure and applied mathematics, and aesthetics

Mathematics arises wherever there are difficult problems that involve quantity, structure, space, or change. At first these problems were found in commerce, land measurement, and later astronomy; today, all the sciences suggest problems studied by mathematicians, and many problems arise within mathematics itself. For example, the physicist Richard Feynman invented the path integral formulation of quantum mechanics using a combination of mathematical reasoning and physical insight, and today's string theory, a still-developing scientific theory that attempts to unify the four fundamental forces of nature, continues to inspire new mathematics. Some mathematics is relevant only in the area that inspired it and is applied to solve further problems in that area. But often mathematics inspired by one area proves useful in many other areas as well, and joins the general stock of mathematical concepts. The remarkable fact that even the "purest" mathematics often turns out to have practical applications is what Eugene Wigner called "the unreasonable effectiveness of mathematics in the natural sciences".

As in most areas of study, the explosion of knowledge in the scientific age has led to specialization within mathematics. One major distinction is between pure mathematics and applied mathematics: most mathematicians focus their research on only one of these areas, and sometimes the choice is made as early as their undergraduate studies. Several areas of applied mathematics have merged with related traditions outside mathematics and have become disciplines in their own right, including statistics, operations research, and computer science.

Those with an interest in mathematics often find a certain aesthetic aspect to much of it. Many mathematicians talk about the elegance of mathematics, its intrinsic aesthetics, and its inner beauty. Simplicity and generality are valued. There is beauty in a simple and elegant proof, such as Euclid's proof that there are infinitely many prime numbers, and in an elegant numerical method that speeds up calculation, such as the fast Fourier transform. G. H. Hardy, in A Mathematician's Apology, expressed the belief that these aesthetic considerations are, in themselves, sufficient to justify the study of pure mathematics. Mathematicians often work hard to find proofs of theorems that are particularly elegant; Paul Erdős often spoke of such a quest as a search for proofs from "The Book", in which God had written down his favorite proofs. The popularity of recreational mathematics is another sign of the pleasure many people find in solving mathematical problems.

Notation, language, and rigor

Most of the mathematical notation in use today was not invented until the 16th century. In the 18th century, Euler was responsible for many of the notations used today. Modern notation makes mathematics much easier for professionals, but beginners often find it daunting. It is extremely compressed: a few symbols contain a great deal of information. Like musical notation, modern mathematical notation has a strict syntax and encodes information that would be difficult to write in any other way.

Mathematical language can also seem difficult for beginners. Words such as or and only have more precise meanings than in everyday speech. In addition, words such as open and field have been given specialized mathematical meanings. Mathematical jargon includes technical terms such as homomorphism and integrable. But there is a reason for this special notation and technical jargon: mathematics requires more precision than everyday speech. Mathematicians refer to this precision of language and logic as "rigor".
The infinity symbol ∞ in several typefaces.

Rigor is fundamentally a matter of mathematical proof. Mathematicians want their theorems to follow from axioms by means of systematic reasoning. This is to avoid mistaken "theorems", based on fallible intuitions, of which many instances have occurred in the history of the subject. The level of rigor expected in mathematics has varied over time: the Greeks expected detailed arguments, but at the time of Isaac Newton the methods employed were less rigorous. Problems inherent in the definitions used by Newton would lead to a resurgence of careful analysis and formal proof in the 19th century. Today, mathematicians continue to argue about computer-assisted proofs. Since large computations are hard to verify, such proofs may not be sufficiently rigorous.

Axioms were traditionally thought of as "self-evident truths", but this conception is problematic. At the formal level, an axiom is just a string of symbols, which has intrinsic meaning only in the context of all the derivable formulas of an axiomatic system. It was the goal of Hilbert's program to put all of mathematics on a firm axiomatic basis, but according to Gödel's incompleteness theorem every (sufficiently powerful) axiomatic system has undecidable formulas, and so a final axiomatization of mathematics is impossible. Nonetheless, mathematics is often imagined to be (as far as its formal content) nothing but set theory in some axiomatization, in the sense that every mathematical statement or proof can be cast into formulas within set theory.

Mathematics as science
Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". In the original Latin, Regina Scientiarum, as well as in the German Königin der Wissenschaften, the word corresponding to science means a (field of) knowledge. Indeed, this is also the original meaning in English, and there is no doubt that mathematics is in this sense a science. The specialization restricting the meaning to natural science came later. If one considers science to be strictly about the physical world, then mathematics, or at least pure mathematics, is not a science. Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."

Many philosophers believe that mathematics is not experimentally falsifiable and thus not a science according to the definition of Karl Popper. However, important work on mathematical logic in the 1930s showed that mathematics cannot be reduced to logic, and Karl Popper concluded that "most mathematical theories are, like those of physics and biology, hypothetico-deductive: pure mathematics therefore turns out to be much closer to the natural sciences whose hypotheses are conjectures, than it seemed even recently." Other thinkers, notably Imre Lakatos, have applied a version of falsificationism to mathematics itself.

An alternative view is that certain scientific fields (such as theoretical physics) are mathematics with axioms that are intended to correspond to reality. In fact, the theoretical physicist J. M. Ziman proposed that science is public knowledge and thus includes mathematics. In some cases, mathematics shares much in common with the physical sciences, notably the exploration of the logical consequences of assumptions. Intuition and experimentation also play an important role in the formulation of conjectures in both mathematics and the (other) sciences. Experimental mathematics continues to grow in importance within mathematics, and computation and simulation are playing an increasing role in both the sciences and mathematics, weakening the objection that mathematics does not use the scientific method. In his 2002 book A New Kind of Science, Stephen Wolfram argues that computational mathematics deserves to be explored empirically as a scientific field in its own right.

The opinions of mathematicians on this matter are varied. Many mathematicians feel that calling their field a science downplays the importance of its aesthetic side, and its history in the traditional seven liberal arts; others feel that ignoring its connection to the sciences turns a blind eye to the fact that the interface between mathematics and its applications in science and engineering has driven much development in mathematics. One way this difference of viewpoint plays out is in the philosophical debate as to whether mathematics is created (as in art) or discovered (as in science). It is common to see universities divided into sections that include a division of Science and Mathematics, indicating that the fields are seen as allied but not coinciding. In practice, mathematicians are typically grouped with scientists at the broad level but separated at finer levels. This is one of many issues considered in the philosophy of mathematics.

Mathematical awards are generally kept separate from their equivalents in science. The most prestigious award in mathematics is the Fields Medal, established in 1936 and now awarded every four years. It is often considered the equivalent of science's Nobel Prize. The Wolf Prize in Mathematics, instituted in 1978, recognizes lifetime achievement, and another major international award, the Abel Prize, was introduced in 2003. These are awarded for a particular body of work, which may be an innovation or the solution of an outstanding problem in an established field. A famous list of 23 open problems, called "Hilbert's problems", was compiled in 1900 by the German mathematician David Hilbert. This list achieved great celebrity among mathematicians, and at least nine of the problems have now been solved. A new list of seven important problems, titled the "Millennium Prize Problems", was published in 2000. The solution of each of these problems carries a US$1 million reward, and only one (the Riemann hypothesis) is duplicated from Hilbert's problems.

Fields of mathematics
The major disciplines within mathematics first arose out of the need to do calculations in commerce, to understand the relationships between numbers, to measure land, and to predict astronomical events. These four needs can be roughly associated with the broad subdivision of mathematics into the study of quantity, structure, space, and change (that is, arithmetic, algebra, geometry, and analysis). In addition to these main concerns, there are also subdivisions devoted to exploring links from the heart of mathematics to other fields: to logic, to set theory (foundations), to the empirical mathematics of the various sciences (applied mathematics), and more recently to the rigorous study of uncertainty.
Quantity

The study of quantity starts with numbers, first the familiar natural numbers and integers ("whole numbers") and the arithmetical operations on them, which are characterized in arithmetic. The deeper properties of the integers are studied in number theory, from which come such popular results as Fermat's Last Theorem. Number theory also holds two unsolved problems: the twin prime conjecture and Goldbach's conjecture.

As the number system is developed further, the integers are recognized as a subset of the rational numbers ("fractions"). These, in turn, are contained within the real numbers, which are used to represent continuous quantities. Real numbers are generalized to complex numbers. These are the first steps of a hierarchy of numbers that goes on to include quaternions and octonions. Consideration of the natural numbers also leads to the transfinite numbers, which formalize the concept of counting to infinity. Another area of study is size, which leads to the cardinal numbers and then to another conception of infinity: the aleph numbers, which allow meaningful comparison of the sizes of infinitely large sets.

Space
The study of space originates with geometry, in particular Euclidean geometry. Trigonometry combines space and numbers, and encompasses the famous Pythagorean theorem. The modern study of space generalizes these ideas to include higher-dimensional geometry, non-Euclidean geometries (which play a central role in general relativity), and topology. Quantity and space both play a role in analytic geometry, differential geometry, and algebraic geometry. Within differential geometry are the concepts of fiber bundles and calculus on manifolds. Within algebraic geometry is the description of geometric objects as solution sets of polynomial equations, combining the concepts of quantity and space, as well as the study of topological groups, which combine structure and space. Lie groups are used to study space, structure, and change. Topology in all its many ramifications may have been the greatest growth area in 20th-century mathematics, and it includes the long-standing Poincaré conjecture and the four color theorem, which has only been "successfully" proved with the aid of a computer and has never been proved by hand.

Change
Understanding and describing change is a common theme in the natural sciences, and calculus was developed as a powerful tool to investigate it. Functions arise here as a central concept describing a changing quantity. The rigorous study of real numbers and functions of a real variable is known as real analysis, with complex analysis the equivalent field for the complex numbers. The Riemann hypothesis, one of the most fundamental open questions in mathematics, is drawn from complex analysis. Functional analysis focuses attention on (typically infinite-dimensional) spaces of functions. One of many applications of functional analysis is quantum mechanics. Many problems lead naturally to relationships between a quantity and its rate of change, and these are studied as differential equations. Many phenomena in nature can be described by dynamical systems; chaos theory makes precise the ways in which many of these systems exhibit behavior that is deterministic yet still unpredictable.

Structure
Many mathematical objects, such as sets of numbers and functions, exhibit internal structure. The structural properties of these objects are investigated in the study of groups, rings, fields, and other abstract systems, which are themselves objects as well. This is the field of abstract algebra. An important concept here is that of vectors, generalized to vector spaces and studied in linear algebra. The study of vectors combines three of the fundamental areas of mathematics: quantity, structure, and space. Vector calculus expands the field into a fourth fundamental area, that of change. Tensor calculus studies symmetry and the behavior of vectors under rotation. A number of ancient problems concerning compass and straightedge constructions were finally solved using Galois theory.

Foundations and philosophy
In order to clarify the foundations of mathematics, the fields of mathematical logic and set theory were developed, as well as category theory, which is still in development. The phrase "foundational crisis" describes the search for a rigorous foundation for mathematics that took place from roughly 1900 to the 1930s.[28] Some disagreement about the foundations of mathematics continues to the present day. The crisis of foundations was stimulated by a number of controversies at the time, including the controversy over Cantor's set theory and the Brouwer-Hilbert controversy.

Mathematical logic is concerned with setting mathematics within a rigorous axiomatic framework and studying the implications of such a framework. It is home to Gödel's second incompleteness theorem, perhaps the most widely celebrated result in logic, which (informally) implies that any formal system containing basic arithmetic, if sound (meaning that all theorems that can be proved are true), is necessarily incomplete (meaning that there are true theorems that cannot be proved within that system). Gödel showed how to construct, from any given collection of number-theoretical axioms, a formal statement in the logic that is a true number-theoretical fact but does not follow from those axioms. Therefore, no formal system is a complete axiomatization of full number theory. Modern logic is divided into recursion theory, model theory, and proof theory, and is closely linked to theoretical computer science.

Discrete mathematics
Discrete mathematics is the common name for the fields of mathematics most generally useful in theoretical computer science. This includes computability theory, computational complexity theory, and information theory. Computability theory examines the limitations of various theoretical models of the computer, including the most powerful known model, the Turing machine. Complexity theory is the study of tractability by computer; some problems, although theoretically solvable by computer, are so expensive in terms of time and space that solving them is impractical, even with the rapid advancement of computer hardware. Finally, information theory is concerned with the amount of data that can be stored on a given medium, and hence with concepts such as compression and entropy.

Applied mathematics
Applied mathematics concerns the use of abstract mathematical tools to solve concrete problems in the sciences, business, and other areas. An important field in applied mathematics is statistics, which uses probability theory as a tool and allows the description, analysis, and prediction of phenomena in which chance plays a part. Most experiments, surveys, and observational studies require statistics. (Many statisticians, however, do not consider themselves to be mathematicians, but rather part of an allied group.) Numerical analysis investigates computational methods for efficiently solving mathematical problems that are typically too large for human numerical capacity; it includes the study of truncation error and other sources of error in computation.

http://id.wikipedia.org/wiki/Matematika