www.nettime.org
Nettime mailing list archives

Re: <nettime> some more nuanced thoughts on publishing, editing, reading, using
J Armitage on Sat, 30 Jul 2011 14:02:22 +0200 (CEST)


Hi all

Fascinating discussion here, especially when one is speaking as both an
academic and a journal editor (Cultural Politics). A couple of points
that don't seem to have been picked up, unless I have missed them in the
flurry of emails on this topic:

. First, there is a massive under-appreciation here so far of the sheer
level of pressure people in UK universities, at any rate, are under right
now as they await the dreaded Research Excellence Framework (REF) in
2014, a system that, as I am sure many here on nettime already know,
expects 4 outputs from 'research active' scholars. This means that
people are expected to, as the old saying goes, 'publish or perish'
(more on the perish part below). At the institutional level, it means
that those who are loaded down with teaching and not 'research active'
are more or less frantic, as they expect to be 'managed out' if they
don't publish something, anything, in the next year or so. This is also
leading to a massive spike at journals in the number of low-quality
submissions (there were enough before) - I am on quite a few ed boards
and I see it everywhere now - not because people expect to be published
but because they can then tell their manager they have submitted something
to a journal. In fact, in complete contrast to earlier times, the longer
the journal does _not_ make a decision on a paper the better, since the
author can rest easy to some extent, having submitted a paper to a
journal, and that gets the manager off their back for a while at least.
See the story below about the complete collapse of the Australian
'excellence framework' in June this year. The killer point is this:

"There is clear and consistent evidence that the rankings were being
deployed inappropriately within some quarters of the sector, in ways
that could produce harmful outcomes, and based on a poor understanding
of the actual role of the rankings. One common example was the setting
of targets for publication in A and A* journals by institutional
research managers."

In other words, the very thing that most academics came into
universities for - the delight of research - has been turned into a
managerial cosh.

. A second point, one which has a very important bearing on the
discussion here on nettime, is that no one to my knowledge has noted
that open access, net journals, etc. are effectively outlawed in the UK
REF system at least. No one I know takes seriously anything published on
the net as a contender for the REF. This is not to say that publishing
on the net is problematic in reality, obviously. However, in the
alternate reality that is the REF in the UK, almost everything that is
published in a net journal is simply sneered at by the powers that be as
mere fluff because it has not been published in a 'proper' academic
journal (i.e. one published by a 'reputable' publisher with paper
covers). This sort of thing may well be out of the ark for many nettime
readers but the actual result on the ground is that many academic staff
in the UK cannot 'afford to' publish in net journals because, unless
they have their precious 4 REF outputs already, any text they submit to
CTHEORY or wherever is already not worth the paper it is (not) written
on. So, many UK academics are effectively forced to decline any
invitations to journals or, if they accept them, know full well that
whatever they write cannot be included in the REF.

Paraphrasing William Gibson somewhere: the future of open net publishing
may well already be here, but it's certainly unevenly distributed.

Best to all.

John
#####################
http://theconversation.edu.au/journal-rankings-ditched-the-experts-respond-1598

The Australian government has dropped the contentious system of ranking
academic journals and assessing academics based on their ability to
publish in the top-ranked publications.

Previously, journals were ranked either A*, A, B or C.

The decision was announced as part of a review of the way the next
Excellence in Research for Australia (ERA) exercise would be conducted
by the Australian Research Council (ARC).

The ERA is the method by which academic units are assessed and helps
inform which research projects receive funding.

Here is a range of expert views on the changes:

Kim Carr, the Minister for Innovation, Industry, Science and Research
(in a statement to the Senate Economics Legislation Committee)

I have approved a set of enhancements recommended by the ARC that deal
substantially with those sector concerns while maintaining the rigour
and comparability of the ERA exercise. These improvements are:

    The refinement of the journal quality indicator to remove the
prescriptive A*, A, B and C ranks;
    The introduction of a journal quality profile, showing the most
frequently published journals for each unit of evaluation;
    Increased capacity to accommodate multi-disciplinary research to
allow articles with significant content from a given discipline to be
assigned to that discipline, regardless of where it is published (this
method was successfully trialed in ERA 2010 within Mathematical
Sciences);
    Alignment across the board of the low volume threshold to 50 outputs
(bringing peer-reviewed disciplines in line with citation disciplines,
up from 30 outputs);
    The relaxation of rules on the attribution of patents, plant
breeders' rights and registered design, to allow those granted to
eligible researchers to also be submitted; and
    The modification of fractional staff eligibility requirements to 0.4
FTE (up from 0.1 FTE), while maintaining the right to submit for staff
below this threshold where affiliation is shown (through use of a
by-line, for instance).

I have also asked the ARC to continue investigating strategies to
strengthen the peer review process, including improved methods of
sampling and review assignment.

There is clear and consistent evidence that the rankings were being
deployed inappropriately within some quarters of the sector, in ways
that could produce harmful outcomes, and based on a poor understanding
of the actual role of the rankings. One common example was the setting
of targets for publication in A and A* journals by institutional
research managers.

In light of these two factors - that ERA could work perfectly well
without the rankings, and that their existence was focussing
ill-informed, undesirable behaviour in the management of research - I
have made the decision to remove the rankings, based on the ARC's expert
advice.

The journals lists will still be of great utility and importance, but
the removal of the ranks and the provision of the publication profile
will ensure they will be used descriptively rather than prescriptively.

These reforms will strengthen the role of the ERA Research Evaluation
Committee (REC) members in using their own, discipline-specific
expertise to make judgments about the journal publication patterns for
each unit of evaluation.

#####
Professor Les Field, Deputy Vice-Chancellor (Research), University of
NSW, Chair of the Deputy Vice-Chancellors Group of the Go8 Universities

In the past, the ARC have published a ranked list of journals (about
30,000 entries) where Australian researchers publish their work. The ARC
have decided to scrap the journal rankings - there will be no more A*,
A, B or C rankings for journals.

The journal rankings have been one of the most contentious parts of the
ERA.

The ARC have decided that they will not be the custodians of the
official list of ranked journals. There was probably too much angst, too
much pressure and an enormous responsibility to maintain the list. The
lobbying around the journal rankings has been very strong.

This will be welcomed in the sector. The journal rankings have (not
unexpectedly) also driven some very perverse behaviour within
institutions, including:

(i) It was providing incentives to publish in the listed journals rather
than in the most appropriate outlets for the disciplines.

(ii) There was pressure to move away from publishing books or in books
and towards the listed journals since this is what was being measured
and captured by the ERA.

The ARC have opted to give more responsibility to their expert review
panels. The members of the panels will have the discretion, wisdom and
judgement to determine what are the strong, medium or weak research
outputs appropriate for the discipline.

There will be concern in some parts of the sector that this moves some
of the quality "criteria" for the ERA behind the closed doors of the
panels. It places a lot more discretion and responsibility on the panel
chairs and the panel members and removes one level of transparency from
the process.

This move will actually put a lot more responsibility on the panels. The
challenge for the ARC will be to ensure that the panels are
well-constituted and to build the confidence of the sector in the panels
and their ability to make good judgments in their respective
disciplines.

The panels will probably have to expand to ensure that they have
expertise and coverage of the disciplines at a finer granularity.

It is not obvious whether there will be feedback or a report or guidance
from the panels to indicate what they considered the strongest or
weakest of the research outlets in their disciplines.

For all of the criticism levelled at the ERA, the journal rankings did
introduce a new level of awareness of the "quality" of research that has
been conducted and published in Australia. Until the ERA raised its
head, most Australian research metrics simply picked up the volume or
quantity of research output. Researchers are now much more conscious of
the need to focus on quality (rather than simply volume or quantity) and
directly and indirectly, this has shifted the mindset of Australian
researchers.

The minister also announced that the minimum full-time equivalent (FTE)
for staff to be included in the ERA has now been defined as 0.4. That
means a staff member must work at least two days a week at an
institution for their research output to be counted.

This is a necessary change to prevent some gaming of the system;
however, it will probably be controversial, since we know that there are
fractional appointees who do make a very real contribution to our
research effort.

Setting this minimum threshold will mean that we really are counting the
staff who are seriously committed to the institution.



#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime {AT} kein.org