Calvin Cycle 2.0

An article recently came out about a team that used a series of enzymes in a test tube to do something similar to what plants do in carbon fixation. The process is not entirely the same, however: it uses different enzymes from a variety of sources, including animals and bacteria, to complete the reaction chain. This chimeric test tube of enzymes is apparently 25% more efficient than its natural competitor, RuBisCo. Engineering a system more efficient than RuBisCo shouldn’t come as too much of a surprise, because it is actually one of the least efficient enzymes in nature, as I have written previously (see also):

It is indisputable that CO2 concentrations in the atmosphere are increasing, and the burning of fossil fuels causes some or most of it. However, CO2 is a natural part of the life cycle. Plants fix CO2 from the atmosphere in order to grow. RuBisCo is the enzyme which fixes gaseous carbon into simple sugars in plants. This is arguably the single most important enzyme in existence. In addition to plants themselves, all animals and fungi, and most bacteria, are dependent on this enzyme working; it creates the food for those organisms. It also happens to be one of the least efficient enzymes. That is, surprisingly, it doesn’t do its job very well. For one thing, it is very slow. RuBisCo is also capable of catalyzing oxygenation of its substrate rather than fixing a carbon dioxide molecule, and it does so at fairly high rates. When oxygenation occurs, the energy is completely wasted because the byproduct isn’t useful for the plant. Moreover, energy has to be expended to reverse the process and make the substrate available for carbon fixation again. Some plants have even evolved special CO2-concentrating mechanisms to try to combat this problem. Increasing the carbon dioxide concentration of the air by burning fossil fuels should make plants better able to use this enzyme, because a higher concentration of the CO2 substrate increases the enzyme’s efficiency, for example by increasing the likelihood that CO2 will be fixed rather than oxygen. In other words, the expected result of increased carbon concentrations should be bigger plants, faster-growing plants, and/or larger numbers of plants. Both agricultural and wild plants could be expected to benefit from this.
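
The competition behind that last claim can be made concrete with a small back-of-the-envelope sketch. Nothing in the sketch below comes from the article itself; the specificity factor and the dissolved gas concentrations are assumed, illustrative numbers, but the form of the rate ratio for two substrates competing at one active site is standard enzyme kinetics.

```python
# Minimal sketch: why higher CO2 should shift RuBisCo toward productive carboxylation.
# For two competing substrates, the ratio of carboxylation to oxygenation rates is
#   v_c / v_o = S * [CO2] / [O2]
# where S is the enzyme's specificity factor. S = 90 and the concentrations below are
# assumed, illustrative values, not measurements from the article.

def carboxylation_fraction(co2_um, o2_um, specificity=90.0):
    """Fraction of catalytic events that fix CO2 rather than react with O2."""
    ratio = specificity * co2_um / o2_um  # v_c / v_o
    return ratio / (1.0 + ratio)

o2 = 250.0                # uM dissolved O2 in the leaf, assumed roughly constant
for co2 in (8.0, 11.0):   # uM dissolved CO2 at lower vs. higher atmospheric CO2 (assumed)
    f = carboxylation_fraction(co2, o2)
    print(f"CO2 = {co2:4.1f} uM -> {f:.1%} of reactions are productive carboxylations")
```

Under these assumed numbers the productive fraction rises by several percentage points as CO2 goes up, which is the whole basis of the fertilization argument.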

Intelligently designing a better system than RuBisCo, then, is seemingly one of the lowest bars in advanced genetics to clear. Not that that makes it easy in an absolute sense, only that it is easier than, say, designing human geniuses. Getting all these enzymes lined up physically and working well together in a chloroplast is no small barrier.

The obvious purpose of this work and research is to combat climate change. Personally, I am very skeptical that climate change as caused by released CO2 is actually something to worry about. I am inclined to think CO2 hysteria is more an expression of crypto-theology. So, the motivations for this work are suspect. That said, I could still see it being useful. Imagine the kinds of crops we could get if we made them 25% more efficient. What if we could use it to generate a very cheap source of organic fuels and/or starting chemical reagents? Even ignoring overblown warnings about an apocalypse, there is still potential use in this technology. I see no reason not to switch from fossil fuels to better sources if they are in fact better and cheaper.

However, there are other environmental considerations than climate change to at least think about. The first thing that comes to mind is: what would be the consequences of introducing vastly more efficient plants into the wild, whether deliberately or inadvertently? Such a plant would presumably be at least a little more evolutionarily fit than its wild counterparts, and potentially vastly more fit. If so, it could disrupt entire ecosystems on a massive scale in a relatively short time. Out-competed plants would die out, and all the life dependent on those plants would follow shortly thereafter if it couldn’t adapt to the new composition of its environment. The quest to stop climate change could have unintended consequences far outstripping the largely imagined climate apocalypse. However, even in this case I have little doubt life as a whole would adapt and move on, even if the disruption were quite severe. I am reminded of this rather charming documentary (called Cane Toads: The Conquest, in case the link goes bad) about the introduction of cane toads to Australia and all the havoc that caused. Unintended consequences are real, friends, and leftists are masters at generating them.

All this of course assumes that this new system would actually work as well as they hope it might (and that the plants it was introduced into were otherwise capable of fierce ecological competition in addition to the new fixation system). This is possible. Evolution is subject to path dependence. Once the initial system of carbon fixation evolved, life would be stuck with the basic mechanism and could only adapt it in minute steps. It would be very difficult to transfer to a completely different system naturally via small steps. In other words, it might be possible to tweak RuBisCo towards more efficiency, but nearly impossible to substitute a wholly different enzyme which was much better overall. A newly evolved system, even if potentially better after additional evolution, would likely start off less efficient than the long-extant and well-adapted one, and thus would have a hard time sticking around long enough to become a properly better alternative. Therefore, it is quite possible that better systems than RuBisCo are possible yet still unevolved. Some things are quite difficult to evolve.

On the other hand, it is also quite possible that there are good biological reasons for this inefficiency that we don’t know about. If so, other considerations may prevent the newly developed system from working well and/or cause a net loss in fitness due to side effects. In which case it won’t work and there is nothing to worry about. Either way, great care should be taken before committing to the introduction of a vastly different system of doing things, and that applies to biology as much as to government.


Race Hustlers on Reddit are spreading misinformation on /r/”science” again

/r/science on reddit is having a race hustler thread where cathedral academics are telling everyone the same tired nonsense that differences in outcomes between groups are caused by SES and discrimination by white men. I decided it would be valuable to post an excerpt from my book which discusses the left-wing bias in academia, because there is a lot of it and I don’t think these people should get a free pass on spreading misinformation like this. If you have a reddit account, I think it would be a good idea to go in and counter these false claims. There is plenty of evidence that IQ differences have genetic causes, and that thread should be littered with links to it.

Since the probability that my comments will be removed is high, I decided to make a copy of them here [edit: I checked and as expected the comments are already removed. Check out this site which saves all the removed comments. Notice a pattern? Edit2: Apparently this thread was made because high level progressives in science journals or media demanded it in order for the sub to have advance access to new announcements of interesting research. In other words, spread our ideology and we will give you insider access and other perks.]:

*******************************

IQ is the single best studied and understood psychological trait. It has been studied for over 100 years, and research has consistently found that the black/white IQ gap is about 15 points on average. Research has also found that the male distribution is significantly more variable than the female distribution. What this means is that among the smartest people, those most likely to do well in science, the population is about 2 males for every 1 female. Intelligence level has been shown to be mostly determined by genetics. You can see a large compilation of studies and findings which demonstrate how strongly genetic intelligence and other traits are here. These innate IQ differences explain the differences in outcomes of different populations of humans without needing to resort to unfalsifiable and unscientific concepts like “white privilege” or bias.

I gave you enough links above to start on this. Instead of repeating this easily findable information, I am going to talk about the progressive/far-left bias that exists in academia. This bias seriously undermines the credibility of race and gender hustlers who try to use credentialism to support untrue opinions about white/male privilege. For an independent opinion, see the website of liberal social psychologist Jonathan Haidt, where he and others admit to and discuss the problems caused by this progressive bias.

I wrote a book on gender differences in intelligence called Smart and SeXy, and it cites hundreds of studies which together confirm that gender differences in outcomes exist, explain why, and show that they are mostly biological in origin. You can find a wealth of data in there on biological mechanisms. However, I also took the time to analyze the current state of academia, and what I found was extremely troubling. Below is an excerpt of a section of that book. Remember that the focus of the book is on gender, but this left-wing bias also applies to claims about race. Citations for all my claims will be at the end.

Saying that the academic community has a large progressive bias is a very strong claim, and such an extraordinary claim requires extraordinary evidence. So what is known about the “scientists” who publish “research” in politically charged areas? Diederik Stapel was previously a highly regarded and influential Dutch social psychologist who did a lot of work on stereotype threat, until it came to light that he “routinely falsified data and made up entire experiments.” Another example of his politically biased work was a “scientific” article which sanctimoniously claimed to find that meat eaters were more selfish and less agreeable than vegans. Unfortunately, it is impossible to be surprised by outspoken priggishness from vegans and their sympathizers.

Thanks to this media attention, Stapel is now the most notorious charlatan in the field of social psychology, which is saying a lot for what appears to be a regularly fraudulent and pseudo-scientific discipline. Social psychologists as a group fail to make the data they collect available for outside review two-thirds of the time. This stinginess with data is actually against the ethical rules established by social psychologists themselves, and it suggests that there are likely many more Stapels out there who simply haven’t been caught. A survey by the Harvard Business School found that 70% of social psychologists admitted to cutting corners in reporting data, 30% to reporting unexpected findings as if they were expected from the start, and 1% to falsifying data.

Another meta-analysis of papers published in high-tier psychology journals found that 50% of papers surveyed contained at least one statistical error and 15% contained an error so severe that the conclusion drawn would have had to be reversed.i, ii A meta-analysis which looked at whether or not positive results from stereotype threat studies could be replicated found that almost half could not, and that a further 25% were confounded by methodological issues.iii Methodological issues, especially in determining statistical validity, even allowed one social psychologist to publish, in a major, respected journal, the claim that he had proven the existence of psychic ability. His finding used standard statistical practices in psychology and drew heavy criticism from professional statisticians of both the specific paper and the psychology community generally.iv

This high-publicity criticism led to a fair degree of soul searching in the psychological community and led some researchers to undertake the task of evaluating how widespread these problems are. One analysis reviewed articles from the last 100 years in the top 100 journals ranked by impact factor, a measure of the level of influence a paper or journal has on the field. It found that in that time, for the highest-impact journals, only 1% of all research findings in psychology had ever been replicated. Of that 1%, only 14% were in fact direct replications; the rest tested similar hypotheses under different conditions. However, successful replications themselves have to be received critically. Half of the 1% of replications had authors from the original study; this is troubling because the presence of the previous author greatly affects the chance of positive replication and implies bias might be playing a role. 92% of replication studies with an author from the original paper confirm the original result, while only 65% of replications by independent researchers do so.v

Problematic methodology isn’t the only issue in psychology. Ideological bias is rampant in the humanities generally, but especially in social psychology, both among individual researchers and among the journals publishing papers. Beyond the lack of objective critical evaluation of papers, the field itself is essentially an ideological and political echo chamber that is considerably more left-wing politically than the general population. 80% of social psychologists identify as liberal, while only 3 out of 1000 identify as conservative. Contrast this with the general population, which is 40% conservative and only 20% liberal, the remainder being moderate or apolitical. Across the social sciences, the ratio of liberals to conservatives varies from 8:1 to 30:1.vi Were these sorts of numbers occurring with an ideologically designated protected class, these same social psychologists would be the first to use them as incontrovertible proof of discrimination.vii, viii

Considering what is now known about the biological origins of cognition and intelligence (discussed in more detail in future sections), it is generally difficult to take claims of discrimination seriously when underrepresented groups also display relatively lower intelligence profiles. However, in this case there is no reason to think that conservatives as a group have an intellectual profile below the general population. Social conservatives tend to be a little lower in intelligence relative to liberals, but free-market conservatives (libertarians) tend to be smarter than liberals. Being very partisan, either liberal or conservative, also tends to be associated with high IQ.ix Increased income, which is a proxy for IQ, also moves people right ideologically.x In other words, there is nothing that biologically determined intelligence can do to explain the lack of conservatives, and even moderates, in the humanities.xi

In a survey of social psychologists, it was found that conservative respondents feared negative consequences from revealing their political affiliation, and that they were right to do so, as liberal respondents expressed willingness to discriminate against conservatives in approving papers, grant proposals, and hiring decisions.xii The more liberal a social psychologist is, or the more consequential the decision would be for the conservative, the more willing liberal social psychologists are to discriminate.

The temptation . . . to advance a political agenda is too often indulged in sociology, especially by activist faculty in certain fields, like marriage, family, sex, and gender . . . Research programs that advance narrow agendas compatible with particular ideologies are privileged . . . the influence of progressive orthodoxy in sociology is evident in decisions made by graduate students, junior faculty, and even senior faculty about what, why, and how to research, publish, and teach . . . The result is predictable: Play it politically safe, avoid controversial questions, publish the right conclusions…

[Compared to conservative sociologists] Politically-correct sociologists enjoy certain privileges in a very politically conscious and liberal discipline. They can, for example, “paint caricature-like pictures based on the most extreme and irrational beliefs of those who differ from them ideologically without feeling any penalty for doing so,” and “can systematically misinterpret, misrepresent, or ignore research in such a manner as to sustain [their] political views and be confident that such misinterpretations . . . are unlikely to be recognized by [their] colleagues” [Social science researchers believe] “that social science should be an instrument for social change and thus should promote the ‘correct’ values and ideological positions”vi

With this sort of cultural climate, exploring gender differences, or even just acknowledging that such differences exist, is extremely difficult for professional scientists to do today. This pattern of ideologically driven academics significantly undermines the ability of an objective outsider to trust the conclusions coming out of certain fields, especially when they relate to such a politically charged subject as gender (and race) differences in test scores. It is quite clear that the overwhelming majority of researchers working on this topic possess a politically desired outcome for these studies. The great potential for this systemic Lysenkoism to motivate the production of inaccurate results and interpretations contrary to reality can’t be overestimated. The objectivity of the field in concluding that stereotype threat in particular is a real and large effect is highly questionable.

Calling cynical skepticism of the social sciences “anti-intellectual,” a common criticism directed towards conservative thinkers, is only accurate in the sense that these “scientists” have misdefined the word “intellectual” to describe their political ideology and therefore themselves. It is quite conceivable that the modern attitudes attributed to conservatives and described as “anti-science” are fundamentally just a non-inevitable reaction to what can only be described as pseudo-science being published by leftist activists in academia; and stereotype threat is just one example of peer-reviewed pseudo-science.xi

Certainly in some cases there are conservatives that legitimately hold anti-scientific views, such as in the case of evolution generally. But when it comes to evolution of the human species specifically, many liberals are just as anti-scientific as the most hardcore creationist. The main difference is that the left, being dominant in state institutions and having ample government funding, has the power to enforce idealism contrary to reality while most conservatives do not have symmetric influence. This asymmetry in power makes leftist anti-reality beliefs of far greater concern and consequence than the equivalent conservative anti-reality beliefs.

For the average person, it isn’t so hard to notice some of the more egregious examples of leftist pseudo-science. Since most people do not have the time or energy to independently evaluate every pronouncement from every field coming out of the scientific community, it is more efficient (and natural) to use a quick shorthand, or stereotype, and extrapolate from the narrower range of data which they do have the time and interest to look into. If their interest happens to be in an area replete with pseudo-science, and that’s likely because politically controversial areas are both the most likely to be interesting and the most likely to contain pseudo-science, then they have found themselves an extraordinary indicator of dishonesty from which they then extrapolate.

As a consequence of general distrust, society is more likely to develop unreasonable movements like the one against vaccinations. It is not reasonable for the scientific community to expect the average person to evaluate every single scientific finding themselves. People have real lives and do not, and should not, have to deal with academic politics. Therefore, scientists need to do a better job rooting out bias, and especially liberal bias, in their fields so the public can actually trust what they say. If academics want to be trusted, they first must be trustworthy, because trust, for institutions as much as individuals, must be earned.

I don’t mean to be misinterpreted when I point out these biases in scientific research. To their credit, the main people who have identified and raised alarm about the bias against non-liberals in academic papers have themselves been liberal social psychologists such as Jonathan Haidt. In fields outside the social sciences, or on their periphery, real bravery is often demonstrated in defiance of orthodoxy. Perhaps my favorite treatment of Cultural Marxism came from a paper which starts by stating “putting aside political correctness” and then continues on to discuss multiple heretical topics without referencing it again. Political correctness is mentioned only long enough to dismiss it as the irrational and fallacious sentiment that it is. This is a hopeful sign, but it must be noted that no serious efforts to actively alleviate the problem within the social sciences, beyond talking about it, have so far been undertaken.

I have a great respect for science generally and see it as the best method so far developed by humans to separate truth from fiction, at least when the core principles of scientific philosophy are actually followed. But the scientific establishment is still a human institution and therefore fallible. The community at times moves unacceptably far away from its core principles, and this usually happens when research topics have strong implications for an over-arching political ideology. The Lysenkoist effect of an overwhelmingly liberal character is just one problem. Another is that senior research scientists often spend as much or more time begging for money than they do actually trying to discover truth. Whether or not they actually get money often depends on how much they publish, which creates an incentive to publish even if the research isn’t very good. Conforming to the political biases of other researchers thus constitutes a quick way to look better with lower-quality research.

From the state of academia, it can be taken that the discrimination hypothesis has a great deal of influence on our current culture and on the determination of public policy through the publication of questionable research. If the discrimination hypothesis is only partially true or largely wrong in the present, then social policies based on it are likely to be largely ineffectual and possibly harmful. Intelligence researcher Dr. Wendy Johnson has stated the importance of this possibility, with reference to X-linked intelligence, succinctly:

Values create the emotionally charged climates pervading discussions of sex differences, making it difficult to evaluate scientific data objectively. Values are extremely important and appropriately form the basis of many actions and social contracts. But the laws of nature are not responsible to us or to our values and may not conform to them. It is important to understand the laws of nature as completely as possible within our circumstances in order to actualize our values as we intend. We can only develop coherent and realistic actions and social policies that will actualize our values if we understand the laws of nature as they exist.ii

Wicherts, J. Bakker, M. (2011) The (mis)reporting of statistical results in psychology journals. Behav Res Methods. 43(3): 666–678.

Franklin, K. (2011) Psychology rife with inaccurate research findings. Psychology today. http://www.psychologytoday.com/blog/witness/201111/psychology-rife-inaccurate-research-findings

Stoet, G., Geary, D. (2012) Can stereotype threat explain the gender gap in mathematics performance and achievement? Review of general psychology. Vol 16(1), 93-102

Wagenmakers, E., Wetzels, R., Borsboom, D., van der Maas, H. (2011) Why psychologists must change the way they analyze their data: the case of psi: Comment on Bem. Journal of Personality and Social Psychology. Vol 100(3), 426-432.

Makel, M., Plucker, J., Hegarty, B. (2012) Replications in psychology research: How often do they really occur? Perspectives on Psychological Science. Vol 7(6), 537-542.

Redding, R. (2013) Politicized Science. Society. Vol 50(5), 439-446

Haidt, J., Post-Partisan Social Psychology. http://people.stern.nyu.edu/jhaidt/postpartisan.html

Tierney, J. (2011) Social Scientist Sees Bias Within. New York Times. http://www.nytimes.com/2011/02/08/science/08tier.html?_r=5&ref=science&

Kemmelmeier, M. (2008) Is there a relationship between political orientation and cognitive ability? A test of three hypotheses in two studies. Personality and Individual Differences. Vol 45(8), 767–772

Morton, R., Tyran, J., Wengström, E. (2011) Income and Ideology: How Personality Traits, Cognitive Abilities, and Education Shape Political Attitudes. Univ. of Copenhagen Dept. of Economics Discussion Paper No. 11-08. Available at SSRN: http://ssrn.com/abstract=1768822 or http://dx.doi.org/10.2139/ssrn.1768822

Duarte, J., Crawford, J., Stern, C., Haidt, J., Jussim, L., Tetlock, P. (2014) Political Diversity will Improve Social Psychology. Behav Brain Sci. Vol 18. 1-54

Inbar, Y. & Lammers, J. (2012).  Political diversity in social and personality psychology.  Perspectives on Psychological Science, 7, 496-503.

Abramowitz, S. I., Gomes, B., Abramowitz, C. V. (1975), Publish or Politic: Referee Bias in Manuscript Review. Journal of Applied Social Psychology, 5: 187–200. doi: 10.1111/j.1559-1816.1975.tb00675.x

Ceci, S. J., Peters, D., Plotkin, J. (1992). Human subjects review, personal values, and the regulation of social science research. In Kazdin, A. E. (Ed.), Methodological issues & strategies in clinical research. American Psychological Association, 687-704

Crawford, J. T, Jussim, L., Cain, T. R., Cohen, F.  (2013).  Right-wing authoritarianism and social dominance orientation differentially predict biased evaluations of media reports.  Journal of Applied Social Psychology, 43, 163-174.

Munro, G. D., Lasane, T. P. and Leary, S. P. (2010), Political Partisan Prejudice: Selective Distortion and Weighting of Evaluative Categories in College Admissions Applications. Journal of Applied Social Psychology, 40: 2434–2462. doi: 10.1111/j.1559-1816.2010.00665.x

Rothman, S., Lichter, S. R., Nevitte, N. (2005) Politics and Professional Advancement Among College Faculty. The Forum. Vol 3(1). Article 2.

Harvard sex row and science. BBC News. Jan 18, 2005. http://news.bbc.co.uk/2/hi/uk_news/education/4183495.stm

Lallensack, R. (2014) UW to host first feminist biology post-doc program in the nation. The Badger Herald. http://badgerherald.com/news/2014/04/21/madison-host-first-feminist-biology-post-doc-program-nation-rl/#.VO5GIS6GN8H

Pinker, S. (2009) Letter from Steven Pinker to Aarhus University in defence of Prof. Nyborg, December 9, 2009. http://www.helmuthnyborg.dk/Letters-Of-Support/PinkerLetter.pdf

Nyborg, H. (2013) Danish Government Tries to Censor Science it Doesn’t Like. American Renaissance, November 14, 2013 http://www.amren.com/news/2013/11/danish-government-tries-to-censor-science-it-doesnt-like/

Thompson, J. (2013) Helmuth Nyborg gets Watson’d. Psychological Comments. http://drjamesthompson.blogspot.com/2013/11/helmuth-nyborg-gets-watsond.html

Nyborg, H. (2003) “The Sociology of Psychometric and Bio-behavioral Sciences: A Case Study of Destructive Social Reductionism and Collective Fraud in 20th Century Academia.” The Scientific Study of General Intelligence: Tribute to Arthur R. Jensen.

Nyborg, H., The Greatest Collective Scientific Fraud of the 20th Century: The Demolition of Differential Psychology and Eugenics. The Mankind Quarterly. University of Aarhus (Retired, 2007)


One of my favorite twitter accounts deleted (On peer review and social “science”)

Recently, one of my favorite twitter accounts was deleted. The name of the account was @realpeerreview and the main focus of this account was to take excerpts from truly retarded social “science” research papers and show how stupidly our tax money is being wasted. You can see an archive of his work here and an archive of the archive in case that goes down. I would read through all those. It is both hilarious and infuriating at the same time. Why does the public have to pay for such stupidity?

Of course, other social “scientists” didn’t like public attention to their “research” and threatened to doxx the individual broadcasting how atrocious most social “science” research actually is. These cockroaches prefer to stay in the dark under the rug and don’t like anyone lifting up a corner. @realpeerreview was, I guess, left no choice but to back off because his career was possibly on the line. Someone else quickly snatched up the account name after finding out about this typically leftist hiding of the truth, in order to continue the good work. I don’t know if she will do as good a job, but I hope she can give them hell.

Anyway, this is just one more example of why academia, at least in the humanities, should not be trusted at all. I have written on stuff like this before, on how stereotype threat is bunk and how standardized tests are rigged against boys; however, it was nice to see just how widespread the sickness is. It spans hundreds of papers and hundreds of topics. Defunding the humanities would wipe out 100 times more cancerous tumor than healthy tissue. It is time for some emergency surgery.


Smart and SeXy

Smart and SeXy: The Evolutionary origins and biological underpinnings of cognitive differences between the sexes

The soft cover edition is available here. If you are on a budget you can also download the E-book. You can read the amerika.org review here and the counter-currents review here.

This is probably the most heretical work I have ever or will ever put to writing personally, and probably one of the most heretical things available anywhere from the perspective of progressives, feminists, and any other member of the cathedral. If you want a no-nonsense (i.e., no feminism) description of sex differences, then you will probably enjoy the information contained within. If you have questions about what exactly the gender differences in intelligence are, by what fairly exact biological mechanisms they come about, and what potential evolutionary narratives explain what we observe, then this is the book for you. After reading this book you will not only know the current patterns of sex differences in intelligence as shown by psychometric tests, but also why and how the underlying biology explains the patterns we observe. At the heart of the differences are both genetic and hormonal elements which work in concert to generate what we see on an everyday basis. It has taken years of work (since 2011) and hundreds of hours invested in reading hundreds of dry academic papers to compile the more than 300 sources included, but I did so that you can have the evidence all in one place, explained in lay terms. And perhaps most importantly, so that you can have the evidence for gender differences in intelligence without muddying the waters with the foul taint of feminism.

At the heart of The Red Pill and the Dark Enlightenment, when thinking about women, is a kernel which grows to support everything else: all the theory on game, marriage, etc. All higher-level knowledge is dependent on it. The fundamental concept, or more accurately the anti-concept, is the rejection of Equality. Egalitarianism just isn’t so. Men and women aren’t equal and they aren’t the same. Knowing they are not equal allows correct understanding of the world and of relationships, from successful one night stands to successful marriages. The entirety of the manosphere and the red pill is dependent on this insight. The Dark Enlightenment is also dependent on this insight, but it expands it to include not only sex differences but ethnic differences as well.

Having that level of dependence on that initial small kernel can present a problem when it isn’t sufficiently supported by evidence. Though there is this and that study suggesting in a minor way that gender equality is false, it is my view that the information bolstering the rejection of egalitarianism when it comes to men and women lacks sufficient centralization within the manosphere and the neoreactionary community. There may be thousands of individual blog posts on the topic, but mostly each one only addresses a small part of the big picture, and getting the entirety of the picture from these diffuse writings can be more difficult than it needs to be. The known facts are sufficiently dispersed, unorganized, and lacking in coherence that the kernel becomes a source of vulnerability to criticism from the outside. It is, as it were, a chink in our armor that needs to be addressed.

You might think “there is plenty of evidence.” Sure, there is. But, in all honesty, do we (the community more than geneticists) REALLY understand the mechanism? How exactly, at the molecular level, does inequality between men and women come about? It is an important question, and until it is answered so rigorously and thoroughly that it can’t be denied, this will always be a vulnerability in our position. This is why I wrote this book. It is meant to be the titanium plate to cover our chink in the armor. This book coheres the currently available data into a single place and a single narrative that is relatively easy to access and difficult to refute. Moreover, and unlike most feminist theories, it presents a testable hypothesis. The genetic explanation for sex differences in intelligence I propose is something that biologists and geneticists can design experiments to test in order to prove or disprove. By making this hypothesis known to the mainstream, it forces scientists to directly test it. At least that is my hope. Prior evidence suggests what the result of such testing will be.

Another point of this book is to attempt to put to rest once and for all the idea that disparities in achievement between men and women have a chiefly cultural origin; they don’t. The differences between men and women are almost exclusively due to biology. Once society accepts that women aren’t ever going to achieve at the same rate as men, we can stop wasting time and resources promoting women, via affirmative action, into positions and occupations they are not suited for, thus saving a lot of effort and wealth that is currently being wasted. We might also be able to get the birthrate back up to a more stable level and thus avoid demographic problems.

Lastly, to a certain extent it is meant to be a handbook for those who might be faced with deliberation on the topic and who need to quickly reference one type of study or another to demonstrate biological reality. I have made herculean efforts to make this as readable as possible and I believe I have done a good job with this, but I have placed greater emphasis on including as much relevant information with proper citations to credible journals as possible. I wanted to give people knowledge of which studies they need to cite for their particular argument or discussion in one convenient and accessible place.

Who to thank?

I owe some twisted gratitude to the progressive academics who, through their push to shun and silence me in the name of political correctness, gave me the motivation I needed to write this book contrary to their culturally Marxist fantasies. On multiple occasions I have been personally screwed over by people holding that ideology because I was so audacious as to merely mention I had read The Bell Curve and found the points within to be worth consideration. I didn’t even claim to agree with it, just that it is a hypothesis which needs to be taken seriously. That is, I was trying to be an objective biologist, which is what scientists are supposed to do; what we are trained to do, in fact. There were also several situations (probably more, actually) where similar points, but about gender instead of race, met with pretty much the same result. Though it didn’t end up mattering very much, I was rejected from one graduate school because the chairman of the department found out I had a conversation with another professor about The Bell Curve (that professor actually brought the topic up!). That chairman then projected onto me an argument he had with his daughter’s teacher, where apparently the teacher said or believed something sexist. The Bell Curve only briefly talks about gender differences (a couple pages out of 849)…  What the teacher actually did was never very clearly explained. This guy was mad, it had absolutely nothing to do with anything I said to him, and I got a nice rejection because of it. So yeah, I got really pissed, and not for the first or last time. A string of situations just like this created a great resentment within me, which I am sure is quite true of many other people, given the swelling of the red pill, the dark enlightenment, and other internet phenomena. These prig prog “scientists” were being complete a**%^$!s about hypotheses which cover perfectly valid scientific questions, and which, as I show in the book, have a great deal of empirical support. If it hadn’t been for my naive faith in actual objectivity in science, and the subsequent confrontation with the progressive faith that actually exists in science, I almost certainly never would have cared enough to do any of this work. I may never have cared enough to find neoreaction. Yet those things did happen, and now neoreaction, the alt-right, and the red pill have something available that they can use against left-wing creationists, should they desire to use it.

Confrontations like these have made me, and many others, heavily motivated to discredit feminism, because feminist beliefs don’t match the facts and feminists witch hunt anyone and everyone who points that out. The best way to do that is with hard data, and if I didn’t do it, I feared nothing else so comprehensive would come out for years. Or if it did, it would be hidden in esoteric academic texts in obscure journals, and even then it would be dressed in evasive and overly-qualified language. In fact, I would argue that there has been more than enough data available to discredit feminism for a very long time, but paywalls for publicly funded research (don’t get me started on that) and the wide dispersion of everything relevant with substantial credibility have made it difficult to pull everything together. There are many, many papers which touch on the subject, but none that I have been able to locate brings it all together. And they definitely don’t come close to calling out progressives; most try to appease the leftist mobs. To do this right takes an outsider, and it takes someone with an audience. I have a marginal audience, but the biggest help with spreading the information lies with my ties to the other neoreactionaries, who have a much larger following. Likely it will spread to the manosphere blogs due to the porous nature of the divide between neoreaction and that community. Or not; only time will tell.

Blog vs. book

There are a number of bloggers who write for years and then decide after the fact to convert their posts into a book. In my case, I actually went the other direction. I already had this book in progress for several years prior to starting this blog in 2014. A number of posts on this blog (not all) were either direct offshoots from work on this project or were indirectly inspired by my work on the book and later integrated because they were highly relevant to points I was making. Some changed little, while others changed significantly in the move. For the most part, my posts are shortened versions of what appears in the book and have less evidence, fewer citations, and fewer topics as a result of needing to stand alone away from the rest of the text. However, the most important part of the book, in my mind, is the large number of studies collected together from a wide variety of fields, which constitute the evidence for the biological origins of sexual dimorphism in intelligence. This includes both IQ test studies and work on the impact of genetics and hormones on the brain and intelligence. This evidence is exclusive to the book. If you would like a taste of the content of the book before deciding whether or not you want it, I recommend you take a look at the following posts:

Career women are dysgenic

How standardized testing undervalues men

stereotype threat and pseudo-scientists.


Rigging Academic Articles to be more Progressive

I have previously discussed how articles are altered so that their conclusions appear progressive even though the data says anything but. My article on Wikipedia in action is all about this, and my upcoming book Smart and SeXy, which will be published by Arktos, also discusses this with respect to intelligence testing and brain size measurements, among many other things. The red pill subreddit recently had a confession of such manipulation by a firm which does team-building training (archive in case the first link gets lost). Though the source is ultimately 8chan, I have seen enough of this stuff elsewhere that I think it is very plausible that this person is real and being truthful. The short of it is that males very clearly did better than females in an organized task requiring spontaneous coordination. The order of performance went all male –> mixed gender –> all female. Since that doesn’t work for pushing the narrative, nonsense factors were made to appear to be the most important so that it looked like the mixed teams did best. However, the data is still there and unchanged for those who pick at it, and they will be able to see that the male teams did perform better. This is exactly what happened in the research paper I looked at with respect to racial relatedness in the Wikipedia in action article. Though the writing seems to say people of different races can be more related than people from the same race, the data says the exact opposite. So here too we will see another example whenever this “scientific” article is actually released. Keep an eye out for it, because there is more than enough detail for us to look at their exercise description and then trace it directly back to this confession. Having this in hand would be absolutely delicious.

Below is the text of the original confession:

Alright /pol/, here is something to reinforce your opinions on women working in teams.

I am working as a team building coach in Germany. I hold courses for a company where teams are tested and need to work together to fulfil their tasks. The goal is to have a better working team afterwards and to address problems within the team. Now, before I get started, none of this is scientific. We use certain tests that need certain skills and are measured by certain factors, such as time needed, number of steps, etc. We record everything but it is not really a scientific test environment (no control groups, no randomization, etc.)

To describe one particular exercise:

In a group of (usually) 16 people, everyone gets blindfolded and gets an object. 4 people get the very same object. Now it is up to the people themselves to find the other 3 guys with the same object, form a group of 4 people, and advance to the next exercise.

Now, the object is basically two dimensional and the key to finding your group is to count the edges. You can’t see, but you can feel how many edges your object has. The perfect way would be to put a finger on one edge and then start counting the edges with your other hand until you know the number.

You can either tell everyone your method so time is not wasted (indicator of strong leadership skill) or you try to locate someone else, ask him for his number of edges, and so on (poor leadership, no systematic working, you get the idea).

On Saturday last week I had to finish a presentation (I’ll get back to that later, it’s the reason I post it here on /pol/) that was requested by a study group of the BMBF (the “Bundesministerium für Familie und Forschung”, Ministry of Family and Science here in Germany). We keep track of the performance of every team and have access to quite an amount of data. The exercise described has been done 356 times and I want to talk a little about the results.

All female teams did absolutely terrible. There are only very few instances in which the women figured out to count the edges and utilized the method to achieve success, let alone figured out that someone should take the lead. Even with a strong female lead, a lot of women were unable to figure out how to count the edges without losing count. They were just starting to count the edges without indicating where they started. There were 2 reports of women claiming to have objects with more than 20 edges while the physical maximum is nine.

There is almost no difference between all female teams and female teams with strong female leadership. Strong female leadership does increase performance but only if detailed instructions are given by the female leader. It is necessary to describe the process step by step. The best performing all female team with strong female leadership did the following:

  1. Female leader commands everyone to be quiet several times while females are already discussing subjects not related to the task.
  2. Female leader achieves silence, explains that you have to count the edges. She also explains the method.
  3. Female leader asks everyone to find other group members with the same number of edges.
  4. Chaos ensues. Female leader tries to get everyone to be quiet again.
  5. Female leader achieves silence and commands all with 7 edges to move towards her voice.
  6. Female leader appoints a sub leader for another number, asks group members to move towards the voice of the sub leader. Repeats the process several times until all groups are established.

Yet they still performed worse than mixed teams with male leadership and a lot of mixed teams with poor male leadership. This is in stark contrast to an all male team with strong male leadership.

  1. Male leader demands silence right after the task starts. There is no discussion, no period of figuring out who the leader is.
  2. Male leader says everyone should count the edges. There is no explanation of the method, yet there is no documented case in which a male failed to get the right number of edges.
  3. Male leader commands all 43 to move toward his voice, verbally appoints sub leaders for other groups while the others still move.
  4. Subleaders start to command their numbers to come close to their voice, it gets a little louder since 4 people are saying their number constantly.
  5. Groups are established.

This was the fastest documented case. Male teams with no strong leadership came in second. Someone usually yelled the method, everyone else copied it, and then everyone just yelled his number until all groups were established. Mixed teams with (strong or poor) male leadership came in third. Mixed teams with strong female leadership didn’t exist; it was always a male taking the lead or figuring out the method first, and others copied it. Mixed teams with no leadership didn’t exist either. Female teams with strong female leadership came in fourth, and female teams with no or poor leadership came in 5th by a long margin.

Now the problem lies within the results themselves. They are considered sexist and discriminatory. It is not what the study group wants to hear; after all, it is for our super progressive government that sees women as superior to men and mixed teams as an ideal, which is why I was asked by my boss to make it look like mixed teams performed the best. I didn’t want to fix the numbers, I just had to come up with something that made average results look good. So the number one indicator that determines whether it was a success or not is not the time needed, the efficiency of the method, or another metric. It is harmony within the group, display of natural leadership, meaning no one forced someone else to listen to his opinion. Strong male leadership tended to yell out commands that addressed everybody and demanded certain actions, while leadership in mixed teams usually asked politely. I also turned letting your fellow group members figure out the solution themselves and giving them time into a plus. Oh yeah, and creativity of solution, sehr wichtig (very important).

Average became the new greatness. Mixed teams and female teams had top scores on all these feel-good items; performance was ignored. I’m about to hold this presentation later this week and hand over all the data. I am excited to see what they cook up with it, but I left a stinky trip mine in there. The numbers have not been changed, and if they use this for any paper or recommendation in their proposals for new policies, the compromising data is still in there.

So if you see someone claiming bullshit about women being superior or some shit, you should take a closer look at the numbers. What was measured, how it was measured, etc. I’m pretty sure I am not the only one who rigs his data in a way that makes it look better for the intended purpose.


A proposal: Social Matter for the sciences.

A user on reddit posted a link in which he lamented that there is not a neoreactionary magazine devoted specifically to science and technology news. Frankly, I think this is a very good idea. I have published things related to this several times before. You can see two of these articles here and here.

My book on gender differences in intelligence, and the biological basis thereof, is actually finished and loosely qualifies as what he wants. So it is science, just science, coupled with neoreactionary interpretations. I am still negotiating with a potential publisher, otherwise it would already be available. I hope to have it out by the end of the year, but I understand this process takes a long time. If that doesn’t work out, I will put it out on Amazon instead. As important as I feel that is, it isn’t a science magazine that regularly publishes short articles. It would be quite beneficial to start such an institution.

If there were going to be a neoreactionary science magazine, I think it would mostly consist of regular critiques of various published articles, pointing out the liberal bias in them. You can see a good template in Steve McIntyre’s Climate Audit website. Imagine this, but with broader topics and an explicitly neoreactionary position. Academia is a left-dominated institution, after all, and I don’t see reactionary scientists getting funding or being published anytime soon. So really that is all we could do, with the occasional exception. Before I started my current blog, I considered doing just this kind of content. Specifically, I had in mind a blog which took a published article from psychology every week or two and went through it to find bad or missing interpretations of the findings. Social psychology especially provides a wealth of material to be ripped apart by critique, but plenty of other branches do as well. Basically, what I have found in reading these articles is that the data collection and number crunching are often about as decent as could be expected, but the interpretations of the findings are often just way off, or certain conclusions are conspicuously absent. We don’t necessarily have to analyze papers as a statistician would to critique them. In fact, we can often just assume that all the data and math were done superbly (even if that probably isn’t true) and still find major problems with the paper. Conceding that part and focusing on interpretations should make it possible for many more of us to participate in writing content for this magazine. Of course, we could also include pieces which simply analyze new advances in technology. I believe I could make a commitment to creating content at least once every two weeks, and maybe once a week when time permits. Would anyone else in the neoreactosphere be willing to start working on this sort of thing with me, such that we have something similar to Social Matter but for science and technology specifically? Please email me at Atavisionary@gmail.com or comment on this blog so we can start to make a plan.

Also, pretty much any paper should be freely accessible if we use libgen.in and /r/scholar to get them, so no one should have to go out of pocket to get papers for thrashing.


Wikipedia in Action on Race

I like to refer to Lewontin’s fallacy frequently when debating people who deny the biological basis of race. Wikipedia, while clearly not perfect, did have a reasonable article (at least for quick referral of lay-people) on the paper by A.W.F. Edwards which coined “Lewontin’s fallacy.”(1) A brief overview: in the 1970s an academic social justice advocate published a paper(2) in which he claimed that there is more variation among individuals within one race than there is between different racial populations; so much so that you can regularly find people of different races who are more similar to each other than they are to members of their own race. However, the first paper linked above shows that the problem mainly stems from the fact that very few loci were studied by Lewontin. Allele frequencies differ between populations, and with enough loci studied, the ability to distinguish between racial groups based purely on genetic information is quite high, virtually 100%.
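
To see why stacking many individually weak markers works, here is a toy simulation of that point. It is my own illustration, not Edwards’ analysis or data from either paper; the allele frequencies, the per-locus difference of 0.1, and the naive likelihood classifier are all made-up choices meant only to show how classification accuracy climbs with the number of loci.

```python
# Toy simulation: two populations whose allele frequencies differ slightly at each locus.
# Each individual locus is nearly useless for classification, but accuracy approaches
# 100% as the number of loci grows. All parameters are illustrative, not real data.
import numpy as np

rng = np.random.default_rng(0)

def simulate_accuracy(n_loci, n_people=500, delta=0.1):
    # Baseline allele frequencies, plus a small per-locus shift for the second population.
    p_a = rng.uniform(0.2, 0.8, n_loci)
    p_b = np.clip(p_a + rng.choice([-delta, delta], n_loci), 0.01, 0.99)
    # Diploid genotypes: count of the reference allele (0, 1, or 2) at each locus.
    geno_a = rng.binomial(2, p_a, size=(n_people, n_loci))
    geno_b = rng.binomial(2, p_b, size=(n_people, n_loci))
    # Naive classifier: assign each person to whichever population's allele
    # frequencies give their genotype the higher log-likelihood.
    def loglik(geno, p):
        return geno @ np.log(p) + (2 - geno) @ np.log(1 - p)
    correct = np.sum(loglik(geno_a, p_a) > loglik(geno_a, p_b))
    correct += np.sum(loglik(geno_b, p_b) > loglik(geno_b, p_a))
    return correct / (2 * n_people)

for n_loci in (10, 50, 200, 1000):
    print(f"{n_loci:5d} loci -> classification accuracy {simulate_accuracy(n_loci):.1%}")
```

With only a handful of loci, classification is noticeably error-prone, which is essentially what Lewontin measured; with hundreds or thousands of loci, the same small frequency differences compound into near-perfect classification, which is Edwards’ point.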

As is typical for pretty much all articles on Wikipedia, anything that isn’t politically correct can be expected to drift over time, such that claims that are not PC are deleted, diluted, and placed next to a larger number of criticisms than is warranted, implying that the non-PC claims are unsupported or supported only by a very few outliers. Sometimes, as in this article, a paper which can be seen to support one conclusion actually supports the opposite on more careful inspection. All of this is the Wikipedia version of death by a thousand cuts. I once tried editing the page on gender differences in intelligence and was basically run out and banned by Marxist feminists. I assume this happens to anyone who tries to objectively include factual and balanced information in potentially politically incorrect articles. These same people got that article deleted or subsumed into gender differences in psychology for a while, but it looks like it has been resurrected now. Honestly, the constant battle over these sorts of articles is just beyond all reason, and I will never bother editing Wikipedia again. Chances are your work is just going to get deleted, and there are other platforms where that won’t happen.

Subjectively, it seems like this sort of thing has been happening to the Lewontin’s fallacy article, but I will let you be the judge:

Here is an old archived version of this article.

Here is an archived version of the current article.

Here is a direct link to the article. (It shouldn’t look different from the above link at the time of this post, but who knows what future changes will be made; in a year or two it could be interesting to compare these three versions.)

The thing that is most obvious in my mind is that a paper supporting the concept of Lewontin’s fallacy, which was discussed in an earlier version of the article, has had all reference to it completely deleted. Here is the now-deleted content:

Studies of human genetic clustering have shown that people can be accurately classified into racial groups using correlations between alleles from multiple loci. For instance, a 2001 paper by Wilson et al. reported that an analysis of 39 microsatellite loci divided their sample of 354 individuals into four natural clusters, which broadly correspond to four geographical areas (Western Eurasia, Sub-Saharan Africa, China, and New Guinea)

In addition, a paper which purports to undermine the idea that Lewontin’s thinking is fallacious is present at the end in both versions, but is quoted more extensively (and very selectively) in the most recent version. In my opinion, the findings are misrepresented in both Wikipedia versions.

In the old article, this:

The paper claims that this masks a great deal of genetic similarity between individuals belonging to different clusters. Or in other words, two individuals from different clusters can be more similar to each other than to a member of their own cluster, while still both being more similar to the typical genotype of their own cluster than to the typical genotype of a different cluster. When differences between individual pairs of people are tested, Witherspoon et al. found that the answer to the question “How often is a pair of individuals from one population genetically more dissimilar than two individuals chosen from two different populations?” is not adequately addressed by multi locus clustering analyses. They found that even for just three population groups separated by large geographic ranges (European, African and East Asian) the inclusion of many thousands of loci is required before the answer can become “never”

On the other hand, the accurate classification of the global population must include more closely related and admixed populations, which will increase this above zero, so they state “In a similar vein, Romualdi et al. (2002) and Serre and Paabo (2004) have suggested that highly accurate classification of individuals from continuously sampled (and therefore closely related) populations may be impossible”. Witherspoon et al. conclude “The fact that, given enough genetic data, individuals can be correctly assigned to their populations of origin is compatible with the observation that most human genetic variation is found within populations, not between them. It is also compatible with our finding that, even when the most distinct populations are considered and hundreds of loci are used, individuals are frequently more similar to members of other populations than to members of their own population”

was expanded into this:

In the 2007 paper “Genetic Similarities Within and Between Human Populations”,[20] Witherspoon et al. attempt to answer the question, “How often is a pair of individuals from one population genetically more dissimilar than two individuals chosen from two different populations?”. The answer depends on the number of polymorphisms used to define that dissimilarity, and the populations being compared. When they analysed three geographically distinct populations (European, African and East Asian) and measured genetic similarity over many thousands of loci, the answer to their question was “never”. However, measuring similarity using smaller numbers of loci yielded substantial overlap between these populations. Rates of between-population similarity also increased when geographically intermediate and admixed populations were included in the analysis

Witherspoon et al. conclude that, “Since an individual’s geographic ancestry can often be inferred from his or her genetic makeup, knowledge of one’s population of origin should allow some inferences about individual genotypes. To the extent that phenotypically important genetic variation resembles the variation studied here, we may extrapolate from genotypic to phenotypic patterns. […] However, the typical frequencies of alleles responsible for common complex diseases remain unknown. The fact that, given enough genetic data, individuals can be correctly assigned to their populations of origin is compatible with the observation that most human genetic variation is found within populations, not between them. It is also compatible with our finding that, even when the most distinct populations are considered and hundreds of loci are used, individuals are frequently more similar to members of other populations than to members of their own population. Thus, caution should be used when using geographic or genetic ancestry to make inferences about individual phenotypes”,[20] and warn that, “A final complication arises when racial classifications are used as proxies for geographic ancestry. Although many concepts of race are correlated with geographic ancestry, the two are not interchangeable, and relying on racial classifications will reduce predictive power still further.”

The paper itself actually has decent data and methodology. But as is almost always the case with these sorts of things, the interpretation and framing of the results are what matter. It is clear that the authors are deliberately softballing their wording, either to cover their asses (my guess) or to promote a more progressive narrative.

In the following quotes, ω is defined as follows: given a certain number of loci, it is the probability that two individuals originating from two distinct geographical areas will be more genetically similar to each other than to someone originating closer to them. That is, the probability that two randomly selected individuals from different races will be more similar to each other than each is to a randomly selected member of their own race. Keep in mind that ω is not the same as determining what race a person is from genetic data: even with small numbers of loci and a high ω, the probability of misclassifying an individual's race is very low. The excerpts below are from the very same paper that is used to undermine Edwards' paper (a small simulation of ω follows them):

[A relatively large ω is found with low numbers of loci] It breaks down, however, with data sets comprising thousands of loci genotyped in geographically distinct populations: In such cases, ω becomes zero.

With the large and diverse data sets now available, we have been able to evaluate these contrasts quantitatively. Even the pairwise relatedness measure, ω, can show clear distinctions between populations if enough polymorphic loci are used. Observations of high ω and low classification errors are the norm with intermediate numbers of loci (up to several hundred)

Thus the answer to the question “How often is a pair of individuals from one population genetically more dissimilar than two individuals chosen from two different populations?” depends on the number of polymorphisms used to define that dissimilarity and the populations being compared. The answer, ω, can be read from Figure 2. Given 10 loci, three distinct populations, and the full spectrum of polymorphisms (Figure 2E), the answer is ω ≅ 0.3, or nearly one-third of the time. With 100 loci, the answer is ∼20% of the time and even using 1000 loci, ω ≅ 10%. However, if genetic similarity is measured over many thousands of loci, the answer becomes “never” when individuals are sampled from geographically separated populations.
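To make ω concrete, here is a toy Monte Carlo sketch. Nothing in it reproduces the Witherspoon et al. data: the two simulated populations, their allele frequencies, and the amount of divergence between them are all invented, and the distance measure is just a summed allele-count mismatch. The only point is the qualitative behavior described in the quotes above: ω is sizeable with a handful of loci and shrinks toward zero once thousands of loci are used.

```python
# Toy Monte Carlo estimate of an omega-like statistic: how often an individual
# from population 1 is genetically closer to an individual from population 2
# than to another member of population 1. Allele frequencies, divergence, and
# sample sizes are all invented; this does not reproduce the paper's data.
import numpy as np

rng = np.random.default_rng(1)

def simulate_pops(n_loci, n_ind=100, divergence=0.1):
    """Two populations whose allele frequencies differ by a modest amount."""
    base = rng.uniform(0.2, 0.8, size=n_loci)
    shift = rng.normal(0.0, divergence, size=n_loci)
    p1 = np.clip(base - shift / 2, 0.01, 0.99)
    p2 = np.clip(base + shift / 2, 0.01, 0.99)
    pop1 = rng.binomial(2, p1, size=(n_ind, n_loci))   # genotypes: 0, 1 or 2 copies
    pop2 = rng.binomial(2, p2, size=(n_ind, n_loci))
    return pop1, pop2

def estimate_omega(pop1, pop2, n_trials=2000):
    """Fraction of trials in which the cross-population pair is the closer one
    (distance = summed allele-count mismatch across loci)."""
    n = len(pop1)
    closer = 0
    for _ in range(n_trials):
        a, c = rng.choice(n, size=2, replace=False)    # two distinct members of pop 1
        b = rng.integers(n)                            # one member of pop 2
        d_between = np.abs(pop1[a] - pop2[b]).sum()
        d_within = np.abs(pop1[a] - pop1[c]).sum()
        closer += d_between < d_within
    return closer / n_trials

# Omega should shrink toward zero as the number of loci grows.
for n_loci in (10, 100, 1000, 10000):
    pop1, pop2 = simulate_pops(n_loci)
    print(f"{n_loci:>6} loci: omega ~ {estimate_omega(pop1, pop2):.3f}")
```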

Molecular biologists and geneticists use a slightly different definition of polymorphism than some other branches of biology. In this case, they are referring to single-nucleotide differences in the genome. This is equivalent to a one-letter difference in the spelling of a word: "prog" and "prig" mean almost the same thing, but the single letter that differs slightly changes the meaning. That is a reasonable analogy for these differences in the genetic code.
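A trivial illustration, with two made-up sequences:

```python
# Toy illustration of a single-nucleotide polymorphism as a one-letter
# difference: two short invented sequences that differ at exactly one position.
seq_a = "ATGGCTTACCGA"
seq_b = "ATGGCTAACCGA"

diffs = [(i, a, b) for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]
print(diffs)   # [(6, 'T', 'A')] -> one SNP at position 6
```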

What this paper says (and it should say it with less tip-toeing) is that if you only consider a small number of these single nucleotide polymorphisms, there is a high degree of error, and you can often erroneously conclude that two people from different races are more similar to each other than they are to individuals of their own race. The key word is erroneously: this is a statistical artifact, not a biological fact. If you consider thousands of SNPs at once, there is virtually no chance of encountering this problem. The authors found that Edwards was right and Lewontin was wrong: individuals from two different races are never more similar to each other than to people of their own race once enough loci are considered, and the genetics is pretty unambiguous on this. The quotes in the Wikipedia article, and in the paper itself, do not really represent what the researchers actually found.

The researchers had to dress their language up the way they did because of progressive influence in academia. Chances are they would not have been published had they been straightforward about what they found, and even if they could have published political heresy, they might have had their careers ruined by SJWs in academia. To see what happens when you don't toe the progressive line, read what happened to a University of Texas researcher who didn't reach the "right" conclusions about gay couples raising children. There is a huge problem with how Wikipedia articles are written and "maintained," but the editors could not have misconstrued these results so badly if the same sorts of SJWs in academia were not malevolently influencing researchers in the first place. And it should be noted that the Wikipedia editors did, in fact, quote selectively from this already bludgeoned paper. Two layers of SJW influence changed the reported findings of this paper to mean the exact opposite of what it actually found. Unbelievable. It is truly amazing that this sort of shenanigans is allowed to go on.
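The distinction between ω and actually misclassifying someone's ancestry can be illustrated with the same sort of toy model. The nearest-centroid rule below is a stand-in classifier I picked for illustration, not the method used in the paper, and the simulation parameters are again invented. The expected pattern is that the misclassification rate collapses toward zero much faster than ω does as loci are added.

```python
# Toy contrast between omega and outright misclassification, using the same
# kind of invented two-population model as above (repeated here so this
# sketch runs on its own). The nearest-centroid rule is a stand-in classifier,
# not the method used in the paper.
import numpy as np

rng = np.random.default_rng(2)

def two_pops(n_loci, n_ind=100, divergence=0.1):
    base = rng.uniform(0.2, 0.8, size=n_loci)
    shift = rng.normal(0.0, divergence, size=n_loci)
    p1 = np.clip(base - shift / 2, 0.01, 0.99)
    p2 = np.clip(base + shift / 2, 0.01, 0.99)
    return (rng.binomial(2, p1, size=(n_ind, n_loci)),
            rng.binomial(2, p2, size=(n_ind, n_loci)))

def omega_and_error(pop1, pop2, n_trials=2000):
    n = len(pop1)
    # omega: how often a cross-population pair is closer than a within-population pair.
    closer = 0
    for _ in range(n_trials):
        a, c = rng.choice(n, size=2, replace=False)
        b = rng.integers(n)
        closer += np.abs(pop1[a] - pop2[b]).sum() < np.abs(pop1[a] - pop1[c]).sum()
    # misclassification: assign each individual to the nearer population
    # centroid, leaving that individual out of its own centroid.
    errors, total = 0, 0
    for own, other in ((pop1, pop2), (pop2, pop1)):
        other_centroid = other.mean(axis=0)
        for i in range(n):
            own_centroid = np.delete(own, i, axis=0).mean(axis=0)
            d_own = np.abs(own[i] - own_centroid).sum()
            d_other = np.abs(own[i] - other_centroid).sum()
            errors += d_other < d_own
            total += 1
    return closer / n_trials, errors / total

# Misclassification should fall toward zero much faster than omega does.
for n_loci in (10, 100, 1000):
    pop1, pop2 = two_pops(n_loci)
    w, e = omega_and_error(pop1, pop2)
    print(f"{n_loci:>5} loci: omega ~ {w:.2f}, misclassification ~ {e:.2f}")
```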

You might object that "thousands" is a huge number, and that if it takes that many loci to reduce the error to zero, then this statistical demonstration convincingly shows that races don't differ. However, the human genome is about 3 billion base pairs long. If you use 3,000 SNPs (each a single base pair), which is consistent with the low end of the "many thousands" in the paper, you only need 0.0001% of the whole genome to reduce this error to zero. Or, if you want to count SNPs only, there are about 10 million SNPs in the human genome, and a sample of 3,000 is only 0.03% of the total that could be used. This is arguably a conservative estimate, since their Figure 2 shows the error already becoming small at around 1,000 SNPs. In other words, it takes only a vanishingly small fraction of the genome to relieve you of the statistical artifact that can make two humans from different races look more similar to each other than either is to members of their own race.
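For the record, here is that arithmetic with the round figures used above:

```python
# The fractions above, spelled out (genome size and SNP count are the round
# figures used in the text, not exact values).
genome_bp  = 3_000_000_000   # ~3 billion base pairs
total_snps = 10_000_000      # ~10 million known SNPs
loci_used  = 3_000

print(f"{loci_used / genome_bp:.7%} of the genome")   # 0.0001000%
print(f"{loci_used / total_snps:.3%} of all SNPs")    # 0.030%
```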

Yet this paper, which so conclusively shows that human races differ from each other at the genetic level, is used to debunk Edwards' original paper. The authors attempt to debunk themselves, or at least to pretend they found the opposite of what they actually found. It is one of the worst instances of doublethink I have ever come across, and it blows my mind. As a society, we seem to have a real hatred for the truth when it comes to biological realities, and the uninformed are clearly being lied to on purpose.

Sidenote: I know there was another alt-right article in the last year or so about cathedral entryism on Wikipedia, but for the life of me I can't find it. If anyone can provide a link, I would appreciate it. Edit: Found it.

(1) Edwards, A. W. F. (2003). Human genetic diversity: Lewontin's fallacy. BioEssays, 25(8), 798–801.

(2) Lewontin, R. C. (1972). The Apportionment of Human Diversity. Evolutionary Biology, 6, 381–398.

(3) Witherspoon, D. J., Wooding, S., Rogers, A. R., Marchani, E. E., Watkins, W. S., Batzer, M. A., & Jorde, L. B. (2007). Genetic Similarities Within and Between Human Populations. Genetics, 176(1), 351–359. doi:10.1534/genetics.106.067355. PMCID: PMC1893020.
