Clarivate Continued Developments within the Web of Science

Title slide
Aug 24, 2021

A vendor presentation from Clarivate on the new Web of Science interface, with a refreshed look, enhanced workflow tools, and faster page loads.

Transcription

Michelle Fleetwood I just wanted to put our faces up here too as we introduce ourselves. As mentioned, my name is Michelle Fleetwood. I'm a solutions consultant here at Clarivate, joined today by Michael Bragg, manager of the North American Academic Market, and Don Sechler, who's the product manager of the Web of Science. I also put Pam Nila here on the side, who's the new account manager here at Clarivate, and I'm sure you'll get to know her in the coming months. Thank you so much, everyone, for joining us today to speak about the Web of Science and how it can support your institution, your library, and your patrons: the faculty, staff, and students doing research at your institution. So just a little background on the Web of Science to start us off. The Web of Science is a trusted, publisher-independent citation database that allows institutions to confidently discover the most influential research and to assess their contributions and collaborations within the scholarly landscape. The research ecosystem is expanding exponentially in terms of publications, and institutions face many challenges, from finding the right collaborators and securing the necessary funding to deciding where to publish findings so that they have the greatest impact. With each of these pain points and challenges, we aim to be a helpful resource by providing workflow solutions and tools such as the Web of Science, a multidisciplinary platform that connects regional, specialty, data, and patent indexes to our Web of Science Core Collection content through citations. The Web of Science, and the citation network it creates, really allows you to track ideas and innovations across research disciplines and across time, drawing on over two billion cited references from over 182 million records going all the way back to 1900.
And at the heart of our network is the Web of Science Core Collection, a searchable database of citations to the literature. There are over 21,000 journals within the Core Collection, in addition to conference proceedings and books, and the Core Collection content is unique and selective: our publisher-neutral editorial process ensures journal quality, so you can trust the records you discover within the platform. The Web of Science also includes specialty hosted content such as Medline. We have our regional citation indexes, the newest one, added late last year, being the Arabic Citation Index. We have our specialty subject-related collections and specialty content, as well as the Innovations Index and our Data Citation Index. The journals that our editorial team selects for inclusion within the Web of Science Core Collection are then indexed, and their metadata recorded and curated, to deliver a rich platform for discovery and analysis. Shown here are some of the things we capture when we include a journal in the Core Collection. We capture all of the citation counts: we capture the complete bibliography for every paper that goes into the Core Collection, which means for every paper you'll find a times cited count, you can see how many papers cited it, and you can link out to those citing papers, which really allows you to track ideas and research trends over time. We also capture complete author information, so we capture the names and institutional affiliations for every author on every paper from these journals, even if thousands of people have collaborated on a particular article. Using this data, you can quickly identify the leading global experts in a particular research area, or even at your institution. And with this complete author information, again, we're capturing their affiliations as well.
And we've done extensive work to unify institutional names within the data for over 14,000 academic, government, and corporate research producers worldwide. This type of curation really makes it easy to understand collaborations between your university and others, as well as with government and industry. Additionally, we capture funding information back to 2008, and we continue to expand upon the funding information you see within the Web of Science; in an upcoming slide, you'll see exactly how we're doing this within full records. I wanted to point out a little of the data cleansing and unification work that we do within the Web of Science for our Core Collection records. Our teams perform significant unification to ensure we have accurate and comprehensive data. For organization unification, we take the address variants listed by authors and map them to institutions, and we've done this for over 14,000 organizations. Most recently, we have been unifying publisher name variants to a single parent publisher name, and we've now done that for over 12,000 publishers. We also do this for funding organizations, where we pull information from authors' acknowledgments within a record and unify it to a funder name. Again, we've been capturing this funding information from 2008 forward, and right now we have over 1,200 unified funding agencies. All of this data cleansing and unification allows you to analyze each of these pieces of data, your organization, who you're publishing with, who's funding work at your institution, a lot more easily within the platform. Another piece of curation work that we do is in regard to open access content. At the Web of Science, we take a publisher-neutral view of open access and allow users to discover and link out to trusted, peer-reviewed open access content.
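As a rough illustration of the variant-to-canonical unification described above, the mapping step can be sketched in a few lines of Python. The variant list and the `unify_org` helper below are invented for illustration; this is not Clarivate's actual pipeline:

```python
# Hypothetical sketch: map raw affiliation strings to a unified name.
# The variants and canonical name here are invented examples.
ORG_VARIANTS = {
    "univ of michigan": "University of Michigan",
    "university of michigan ann arbor": "University of Michigan",
    "u michigan": "University of Michigan",
}

def unify_org(raw_affiliation: str) -> str:
    """Return the unified organization name for a raw affiliation string."""
    key = raw_affiliation.strip().lower()
    # Fall back to the raw string when no mapping is known.
    return ORG_VARIANTS.get(key, raw_affiliation)
```

In practice this kind of unification involves curated mapping tables and human review rather than a simple dictionary lookup, but the end result is the same: many surface variants resolve to one analyzable name.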
So early on, we provided a grant to the nonprofit OurResearch to improve their open access detection technology, which is called Unpaywall, not just for Web of Science users but for the larger community as well. That partnership continues to evolve, and we're now identifying more articles with associated open access versions by including green submitted versions of manuscripts that have been self-archived in a repository. By adding this type of open access version, you can begin to understand what changes may have occurred during the review and publication process for a particular article. So that's one of the new open access features on the platform. In addition, we are no longer using DOAJ to identify open access articles, and some of our naming conventions have changed as a result. If you'd like to better understand the nomenclature changes we've made around open access, I have included a link on this slide that you can explore after our session. So that's what we've done in the background with the data; I want to move on now to the new Web of Science. Over the last year, our teams have built a completely new version of the Web of Science, which is now the primary version you'll see when you go into the platform. In its development, some of the key features we focused on early on are shown here. One of them is accessibility: we're really committed to ensuring that the Web of Science remains accessible for all users. We're also focused on speed: the new version is fast and more responsive, so you'll see that pages load a lot more quickly, which is nice. And we have an improved design, so the workflows and the look of the system have also been improved, which you'll see as you go into the new platform.
So as you go into the new Web of Science, our development has also been focused on the researcher and their experience when they come into the platform. I'll go into depth on some of these key enhancements in the next few slides, but I just wanted to highlight, at a high level, what's changed within the new interface that has been built over the last year or so. One thing I want to highlight is that we've been very responsive to feedback that we receive from Web of Science users, for example around exporting options and different workflows. When you're in the Web of Science, you can suggest a feature or an enhancement, and this goes directly to our product team for evaluation. This in-product feedback has already informed our development, and the teams have prioritized features like different export options because customers suggested them. Alongside this customer feedback channel, we also have an integrated tool within the interface that provides contextual help, tools, walkthroughs, and links to customer support, meeting the needs of the researcher once they're in the platform. I just want to go through some of the most important changes to the platform now. First, we've built this completely new Web of Science interface, but we've also been working on improving some of the existing functionality that was part of the classic Web of Science. One of those is the author search functionality, which helps you or your researchers find all of the papers that someone has authored within the Core Collection. It also allows researchers to curate this set of publications to make it as accurate as possible within the platform. Not only does this help others discover researchers, but it allows you to view your output and your impact all in one spot.
This author search tool is based on an algorithm that clusters papers together based on an author's name and their affiliations. You can search by an author's name as well as by an identifying number like their ResearcherID or their ORCID number. Based on the algorithm, these author searches lead to author records, and this algorithm learns from feedback: it uses artificial intelligence to learn from user feedback, including feedback from registered Web of Science users, so you yourself can go into the Web of Science and correct an author record if you'd like to. That sort of feedback is then manually reviewed by a team here at Clarivate before any changes hit the system. From these author records, you can see an author's affiliation and all of their publications in the Web of Science Core Collection, along with some metrics to better understand their influence and impact. One recent announcement we've made for these author records concerns that impact piece. As the evaluation of researchers evolves, we really aim to support the community in practicing more responsible evaluations, and we were really excited to introduce this a few months ago: it's called the Author Impact Beamplot, and it's a new way to understand the citation impact of an author's research. Within an author record, you'll see the individual's beamplot, which shows the percentile rank of each paper that the researcher has authored. The percentiles associated with their publications are normalized based on the year of publication as well as the category the paper is in, and this can help you better understand the citation counts that you see within the Web of Science.
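The normalization behind the beamplot, ranking each paper's citation count against papers from the same publication year and subject category, can be sketched roughly as follows. This is a simplified illustration with invented data, not Clarivate's actual calculation:

```python
from collections import defaultdict

def citation_percentiles(papers):
    """papers: list of dicts with 'year', 'category', and 'citations'.

    Returns each paper's percentile rank within its (year, category)
    group, where 100 means the paper is cited at least as often as
    every peer in that group.
    """
    # Group citation counts by publication year and subject category.
    groups = defaultdict(list)
    for p in papers:
        groups[(p["year"], p["category"])].append(p["citations"])
    # Rank each paper against its own group only.
    results = []
    for p in papers:
        peers = groups[(p["year"], p["category"])]
        rank = sum(1 for c in peers if c <= p["citations"])
        results.append(100.0 * rank / len(peers))
    return results
```

The point of the grouping is that a 2020 chemistry paper is only compared with other 2020 chemistry papers, so a raw citation count is turned into a field- and age-aware percentile.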
And I have included some links here to a white paper, a guide, and a video to walk you through this new visualization and help you understand how this metric is calculated and why it can profile a researcher better than a single number potentially can. Another recent enhancement I want to highlight is the "you may also like" feature. This is our recommendation feature in the Web of Science: for a given record, we'll show you a list of relevant records as recommendations. These recommendations are generated by another algorithm, one that uses anonymized usage activity data in combination with article metadata. With a recent update to the system, the majority of records in the Core Collection now have these article recommendations. Additionally, you can now see recommendations on the search results summary page, in a new tab next to the publications tab called "You May Also Like," which provides a list of additional records most similar to the articles returned by a search query. This allows for a more enriched search experience for users once they're within the platform. In the same vein of practicing more responsible research evaluation, we have added additional contextual information to citations in the Web of Science. Although citation counts can be helpful when determining the usefulness of a record, they don't necessarily show the full picture of why something may be cited. This additional citation context is designed to capture the author's intent when citing references in the body of their article. From here, you can see how many times a reference was cited within an article, so you can start to see how impactful it may have been to that author. You can see which references are cited in proximity to each other, so you can begin to see which references may be most related.
And you can also see where in the body of the article a reference was cited, so that you know why it may have been cited. In one of our most recent releases, we added a classification to each in-text mention to indicate why the author may have cited the reference in each instance. Within these enhanced citations, we have a visualization in which a dot indicates each time a cited reference was cited. The visualization is interactive: if you hover over a dot, you can see which reference it was, and you can click "view in-text mentions" to navigate down to the cited reference of interest and see which other references are also cited. This enhancement, I will say, is still evolving, and more and more records within the Web of Science will be showing this enhanced citation information. We're also working on enhancing the funding information visible within the Web of Science, as I alluded to earlier, by adding additional data sources to our funding streams. We now have direct integration with grant details from the NIH, Federal RePORTER, the NSF, Researchfish, and KAKENHI in Japan. Now that we have this direct feed of information from funding sources, we're able to add a lot more detail and metadata to the funding information within a record. You can begin to see the grant title, the grant summary, the grant duration, the principal investigators, and now even the actual award amounts, which is pretty exciting. So a lot of really cool additional data here. Throughout the next year, we will be targeting additional funders for inclusion, focusing on funders around the globe to make sure this enhancement really benefits the entire research community. And before I hand it over to my colleague Don, I do want to mention that we have made a lot of new enhancements to the Journal Citation Reports as well.
So in addition to our annual update of the metrics, which happened this summer, we've added some other nice enhancements. We have increased the content available by including two other citation indexes from the Core Collection that have never been part of the JCR before: the Arts & Humanities Citation Index and the Emerging Sources Citation Index. We've also created a new journal-level metric called the Journal Citation Indicator. This is a field-normalized metric that allows you to easily compare journals that may be in different disciplines from one another. We've also included early access content, and we've completely redesigned the user interface: it's a lot more modern and intuitive, and it's also customizable, to really meet your journal analysis needs. If you want to learn more about these enhancements to the JCR specifically, this slide has some links out to our blog series, but you can also feel free to reach out to us afterwards if you have any questions on the new JCR. And with that, I'll hand it over to Don.
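As a rough sketch of what "field normalized" means for a journal-level metric like this: each paper's citation count is compared against an expected count for comparable papers (same category, year, and document type), and the journal's score is the mean of those ratios, so 1.0 represents the world average. The function below is an illustrative simplification under those assumptions, not the published JCI methodology in full:

```python
def journal_citation_indicator(papers):
    """papers: list of (citations, expected_citations) pairs.

    expected_citations is the average citation count for papers of the
    same category, year, and document type. A score of 1.0 means the
    journal's papers are cited at the world average for their fields.
    """
    # Normalize each paper's citations by its field/year expectation.
    ratios = [c / e for c, e in papers if e > 0]
    return sum(ratios) / len(ratios)
```

The appeal of this kind of metric is exactly what Michelle notes: because every paper is measured against its own field's baseline, journals in low-citation and high-citation disciplines land on one comparable scale.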


Don Sechler Cool, can you guys hear me OK? Yes. Great. So a couple of things I want to reinforce that Michelle covered earlier in the presentation, namely the primary drivers for the release of a new Web of Science platform. The Web of Science had been out there, very popular and highly used, for a number of years, and what we were up against with the classic version was that we had extended the experience as far as we could using the technology and the interface design from back in 2006/2007, when we first conceived and released that version. In the intervening years, new ways of building and delivering web-based products emerged, and while we could continue to develop on the existing platform, we were running up against some functional limitations. A lot of these were related to our ability to meet accessibility guidelines with the interface. Based on how the interface was built, and certain decisions we had made in the templates for how the Web of Science was designed, there were blockers to meeting the current accessibility guidelines. In fact, one of the targets we've had is meeting the WCAG AA guidelines, at least keeping that as a target, and we realized there was just no way to even get close to those guidelines on the technology of the classic product. So early on we ran up against those barriers, realized it was going to take a redesign and basically a new way of delivering the content on the internet, and that's really what kicked off the building of the new Web of Science platform.
The guiding philosophy was that the Web of Science is a successful product that largely reflected use cases and user workflows determined and designed based on input from researchers and information professionals over a number of years. So there were not a lot of things that were inherently broken; it wasn't as if users said, this doesn't work. There were enhancements that had been requested, and there were features people thought could be made more intuitive or easier to use. But by and large, we wanted to keep the main functionality and the flexibility: an easy and intuitive interface that also allows advanced searching and supports advanced use cases. Our guiding principle wasn't that we needed to throw away everything we had built initially and start again with a completely reimagined product. Most of the things we had built served their purpose and were popular and appreciated by users of the product, but we needed to make them a bit cleaner in design and rebuild them using new technology, so we could deliver faster load times, make the product accessible, and things like that. So when you move to the new Web of Science, most of the things that you could do with the classic version are still there. They may have been redesigned, and some things may have been moved slightly to reflect a new navigation style, but most of what you did in the existing version is still there. However, during this process a few things were not there on day one. The idea was that we got out about 95 percent of the functionality, the most popular functionality supporting the most traditional and popular use cases, but there were some things that were not delivered for every product, or every index, at the same time.
Since our move to the new Web of Science in early July, we have been delivering new features, updating existing features, and migrating classic features over to the new environment in biweekly releases. When we released in July, there were some things we didn't have, like the ability to analyze results or create a citation report from the Marked List; there were some things missing from advanced search, and some export styles weren't available. So some things we knew we didn't deliver on day one, we released throughout the month of July. Some of these are part of the new functionality, which involves how you work with saved searches and actually share searches with other users, so we spent some time broadening that delivery out across the entire platform. In August, we continued to add new features: some were features we were catching up on, and some were completely new. We brought in BibTeX as an export style, which we supported in the past but had not brought over on day one of the migration. We added non-unified affiliations back into Analyze Results; they had been there in the previous version. And very shortly, I think in the next release of the product, we will be delivering an export that includes the full record and the cited references, the full complement of all the data that was available. That was something we left out initially and are bringing back. That full-record-plus-cited-references export is very popular, because it extends a locally downloaded database into something that can be analyzed offline, looking at the citation connections offline. So that's something we're bringing back in.
I think it's in the next release, which I believe is the second week of September at this point. As we move into September, there are a lot of new features we'll be bringing over, such as running a locally saved search. A lot of users have searches from the classic version saved locally as a file, and we're going to bring back the ability to open those up and run them in the new version. There's one feature we have been working on since the release, which is creating alerts from set combinations. Right now, alerting is very functional and very easy to do in the product; you can create alerts in a lot of different ways. But one of the things we know we have not yet brought over is the ability to build up a multi-step search based on set combinations and then create a one-click alert for that combined search. That's something we'll be bringing back in mid-September. So again, there's going to be ongoing iteration on the platform. Right now we're hitting about a biweekly release schedule, meaning new features are released about every two weeks. Some of them are small things, like adding a new export style or refreshing a search aid from a specialized collection, and some are larger enhancements that reflect long-term plans to deliver new features. Many of the new features you will see relate to the work we've done over the past couple of years on what we call the author record, which is our ability to depict the career of a researcher based on their publications across the Web of Science. We want to create that picture of a researcher both for researchers who may have claimed records, said, yes, I wrote those papers, and actually interacted with the data to establish their own profile, and also for researchers who have not engaged with the data.
That means creating algorithmically generated author records that more accurately depict what a person's output might be. This is necessary in our data because the Web of Science includes a wide range of authorship going back to 1900, and those author names on a paper, in many cases, don't depict an actual individual. The name on the paper is just a string of characters; it doesn't actually capture an individual. Researchers may be publishing under different names, and journals may present names in different formats. So our work on defining an author and creating a record for that author is ongoing, and you'll continue to see a lot of enhancements around it in the near term and the long term. Can you navigate to the next slide? OK, this slide basically recaps, or previews, the work that will be delivered in Q4 and extends that view out into the beginning of next year, Q1 of 2022. One of the things I just mentioned is Publons peer review in the Web of Science. The Publons environment is where researchers can keep track of their recognized peer review, and what we'll be doing is linking out to the peer review record for a particular article from the Web of Science. If there is a peer review record for a particular article, we'll be linking to it and pulling it directly into the Web of Science, showing it on the record.
There is also something you'll see right now, and I'll show it to you in a little bit: a way for users to submit feedback and monitor that feedback in a more transparent way, directly in the application. We're using a third-party tool to collect feedback directly and then allow users to go in and vote for certain features or monitor their own feedback on a regular basis. That's available now, and we'll be expanding on it and improving it over time as well. The third line there, a work in progress, is the author citation map. One of the things the new technology we're building the interface with allows us to do is really expand our ability to deliver visualizations. Visualizations on the Web of Science have traditionally been very heavy and very resource intensive, and we have millions of users every day. When you have millions of users trying to create a very resource-intensive visualization, often we weren't able to achieve the level of visualization we wanted. But we do want to bring in this idea of an author citation map, which shows more of a citation network view of an author. You're probably going to see that initially around the author record, where you're viewing an author and looking at that beamplot, which tracks the author's citation impact at a more granular level. We'll be introducing that, I believe, around the end of Q4; it's in development now. There's also a lot of work going on with the enriched cited references and citations: an overview of a cited reference that includes the position of the reference in the paper, the frequency of the reference in the paper, and the section where the item was cited in the paper.
Initially, we started that with a small number of journals, and we're going to be expanding it to up to several thousand journals in the coming quarters, and hopefully continue to extend it across many more journals in the Web of Science Core Collection. We're taking a little bit different approach to this idea of enriched citation context than other products are, in that we view the idea of confirming or disagreeing with a citation as something that is interesting, but not necessarily the most important piece of information when someone is viewing a citation record. Things get cited for a variety of reasons, but you can really start to see how important certain citations are by looking at where they're cited in the paper and how frequently they're cited within the paper; that goes beyond just looking at a confirmation statement about why something was cited. So that's going to be expanding, and we're looking for a lot of user feedback around it. We've done some work to make it a little more transparent and a little easier to understand, but that will keep improving, because we know it's something brand new and it will take a little while for users to get comfortable with the display and the navigation of the citation context. A couple of other things are coming in the short term: there are a few things we've introduced in our InCites product that are coming into the Web of Science. One of the most popular is the idea of citation topics. The Web of Science has had subject categories assigned at the journal level, and they tend to be fairly broad: we have about 250 subject categories across the entire Web of Science Core Collection, and beyond that, all you have are author keywords.
Citation topics are our attempt to assign more granular subject designations at the item level. We're using an algorithm to assign and cluster articles around a given citation topic, and that is much more granular than the 250 subject categories assigned at the journal level. Again, citation topics are already in our InCites platform, and they're based on Web of Science data; what we're going to be doing is bringing them back into the Web of Science to display on records, to make them searchable, and to make them part of an analysis of the data, so you get more granular feedback around a particular item. And as Michelle mentioned, we're expanding our funding data. The way we're creating this funding data in the Web of Science is by going directly to the funding sources, downloading the descriptive data about a particular grant, and linking it to publications. Right now, we're only dealing with about six or seven different funding agencies. We have about 20 funding agencies on our short list for inclusion; we should be adding another five funding agencies in Q4 of this year and then continuing to add more in early 2022. The biggest funding resources we're adding, I think I put them in the chat, are UKRI, which includes, I think, slightly over half a million different grant records, and a range of funding resources from Canada, including NSERC and a social science funding body in Canada. I think that, again, includes maybe around 500,000 funding records that we will be ingesting into the Web of Science and connecting to Web of Science Core Collection records. That data is also being passed down to InCites to allow for analysis, at a more granular level, of funding amounts and program assignments for particular publications.
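The grant-linking Don outlines, pulling descriptive grant data from a funder's feed and joining it to publications through the grant IDs that authors acknowledge, is essentially a lookup join. A hypothetical sketch, with the grant IDs and fields invented for illustration:

```python
# Invented example of grant metadata as it might arrive from a funder's
# feed; the ID format and field names are illustrative only.
GRANTS = {
    "R01-123456": {
        "title": "Example grant",
        "amount": 500_000,
        "pi": "A. Researcher",
    },
}

def enrich_funding(acknowledged_ids):
    """Attach full grant details to the grant IDs a paper acknowledges.

    IDs with no matching grant record are simply left unenriched.
    """
    return {gid: GRANTS[gid] for gid in acknowledged_ids if gid in GRANTS}
```

The value of the direct feed is on the right-hand side of that join: once the grant record itself is ingested, every linked publication can surface the title, summary, duration, investigators, and award amount Don describes.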
And then the last column here under work in progress is something we've been working on for about six months already: bringing preprints to the Web of Science. This is probably not something we'll be delivering in Q4; it's more likely to be delivered in Q1 or Q2 of 2022. It's the ability to create a complete preprint index that lives alongside the Web of Science Core Collection. The big challenge there, I mean, the preprint data is available and indexable and we can ingest it, but one of the problems we're trying to sort out is actually connecting preprints to the final publication, because in many cases a preprint is the earliest version of an item, and that item is eventually published. We want to connect those documents together, to show that a publication in the Web of Science Core Collection is connected to a preprint. However, we also want to keep the citation records separate, so a preprint can have its own citation record and the article that's eventually published can have its own unique citation record, because they are two different documents and in many cases their citation records need to stay separate. Those are some of the issues we're dealing with, and the complexities of delivering a preprint index that meets Web of Science standards. Over in the second half of the slide are concepts in discovery. There's a wide range of things we're doing research on right now that are not yet in active development; we don't yet have people coding on them or developers working on them. These are the next generation of the work we're doing.
One thing that's exciting is organization data management, which is basically to give institutions more transparency around how their institutional data is unified; that's work that's ongoing right now. And there are other things like author citation alerts, where somebody can create a citation alert based on a complete author record, so that any time an author is cited across any of their works, they get an alert. That's something else we're working on. Next slide. OK, so these next couple of slides really go into how we're going to be working with author records, and some enhancements that are coming late this year and early next year. The author records impact how we're treating unique researchers within our data. There are about twenty-seven million author records in the Web of Science Core Collection, and those intersect with about 2.5 million Publons profiles. Those profiles are external to the Web of Science, where a researcher may have started out tracking their peer review record and then ultimately started claiming their publications to their profile. About 800,000 of those are highly curated profiles that intersect directly with the Web of Science data. So we have this very rich overview of authorship, and a certain percentage of those researchers, approaching a million (we'll probably reach a million by the end of this year), have actually curated their publication record. That's the universe of author records we're dealing with. What we want to do is bring these things together, so there's no longer this idea of a separate Publons universe that intersects with the Web of Science; we want to bring it all together and make it all discoverable in the Web of Science. 
And so one of the biggest things we're working on right now is creating a very seamless integration, so that any researcher, whether or not they have publications in the Web of Science, is discoverable when somebody is searching for people within the Web of Science. It's really merging these two environments into a single environment and bringing that into the Web of Science. Next slide. What that will leave us with, ultimately, is what we hope is a very rich and valuable record of an author's output. That output could be their publication record, but it could also be other things like awards, affiliation information, reviews they've done, grant reviews and editorships. All of those things that are currently external, captured in the Publons database, get brought together into a single profile, and that profile becomes discoverable in the Web of Science. Some of these will be researchers who have published things indexed in the Web of Science, but there may also be people who don't have publications in the Web of Science itself, who just have reviews they've written. So it's basically bringing those two things together and making them transparently discoverable in the Web of Science platform, without needing to go out to InCites or Publons to discover one type of impact and to the Web of Science to discover a different type: we bring it all together and make it discoverable right there within the Web of Science experience. Next slide. Yeah, so these are the goals: a single user profile and one management system, so there's one place to go, rather than two environments, to manage the different elements of your impact. 
It's all done in one place, with one search experience, keeping it in the Web of Science so you can search for people and find all the people across the Web of Science. The same goes for signing in and the account information about you as a person taking certain actions: keeping all of that in a single location, so somebody doesn't have to sign in across different environments. Basically, we're creating a single product experience that all looks and feels the same, is intuitive, and captures all the people who are using the Web of Science. There are many millions more people using the Web of Science than are using Publons, and this will make researchers more discoverable, more available to the research universe, by bringing them all together. Next slide. OK, this is just a mock-up of where we plan to go beyond that. The first step is bringing the researcher profile in and connecting it to peer review and publishing information; those are elements already tracked in Publons, so we have the record of a person's review activity, their published output, and their connection to publishing, meaning they can add editorships. Then we extend that out to connect to grants and awards, which again we're bringing into the Web of Science, so those things become part of the publication record. And then we move out from there to create a network of people, to show the collaboration opportunities and collaboration networks around a researcher profile. So it's really keeping the researcher at the center of that ecosystem, and connecting them to the different types of activities they might be involved in and the types of outputs they might have. 
So you'll start seeing a lot of work on the researcher profile over the next couple of quarters, and again, that's where we're putting a lot of effort and energy: into that experience, trying to simplify it and bring it to the forefront when people are navigating through the Web of Science product. Are there any questions? I've been talking for about 15 minutes, I guess. Any questions or comments in the chat screen? I don't know if people can unmute themselves to ask questions. "Since you started the merge of ResearcherID and Publons, a widget disappeared that pulled publications into ORCID from the Web of Science. Do you plan to bring that widget back?" That may come back, basically as Publons comes into the Web of Science. But what we have built right now is the ability to keep the Publons record in sync with ORCID. So it's no longer going directly from the Web of Science into ORCID; records are collected in Publons and then kept automatically synced with ORCID. I think that's what most users prefer: I just put it into Publons and it automatically goes into ORCID, without having to save from the Web of Science to ORCID. And then, I think, the old process was to push from ORCID into ResearcherID, so it was a bit of a multi-step process that didn't really follow what people wanted to do. But we're going to keep looking at that as well, and if there does come a time when we need to make a more direct connection to ORCID, we may explore that option. Um, we'll go to the next slide; I think there are a couple more things in here. 
OK, so one of the other things we work on, which is available to institutions who subscribe to the Web of Science, is APIs, and we've spent a lot of effort and energy enhancing our API experience over the past couple of years. We just released a new Journals API, which for the first time allows access to a lot of the underlying data that's released in the Journal Citation Reports; it's a direct API that lets you integrate with that journal information. We've simplified the download experience so you can work directly with the APIs, enhanced how people can connect those APIs to their own scripts, added easier integration points, and continued to improve our documentation. So if people are using our APIs, we have more APIs on offer, and we're trying to improve the experience of using them, both those that are native to the Web of Science and those that may be an additional subscription. OK, here's another question: "Is there a plan to integrate institutions' proxy link preferences/prefixes into query and article links?" That's something we are actually taking a very close look at. We can integrate an institution's proxy information into alerts today: if an institution wants that, the alerts that are delivered will have the proxy information embedded in the inbound URL link. It is a special request and takes a little bit of special work, but it is something we can do. 
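[Editor's note: as a rough illustration of what working with these APIs from a script can look like, here is a minimal Python sketch. The base URL follows the general pattern of Clarivate's developer APIs, but the exact endpoint, parameter names (`databaseId`, `usrQuery`) and the `X-ApiKey` header are assumptions to verify against the current API documentation. The snippet only builds the request; it does not send it.]

```python
import urllib.parse
import urllib.request

# Hypothetical endpoint and key; check the Clarivate developer portal
# for the real base URL, parameter names, and authentication header.
BASE_URL = "https://api.clarivate.com/api/wos"
API_KEY = "YOUR-API-KEY"

def build_wos_request(query: str, count: int = 10) -> urllib.request.Request:
    """Build (but do not send) a GET request for a Web of Science query."""
    params = urllib.parse.urlencode({
        "databaseId": "WOS",   # assumed parameter: search the Core Collection
        "usrQuery": query,     # assumed parameter: the query string
        "count": count,        # assumed parameter: page size
    })
    return urllib.request.Request(
        f"{BASE_URL}?{params}",
        headers={"X-ApiKey": API_KEY},  # assumed auth header
    )

# Topic search, in Web of Science advanced-search syntax:
req = build_wos_request("TS=(citation topics)")
print(req.full_url)
```

Sending the prepared request (for example with `urllib.request.urlopen`) would require a valid institutional API key.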
Integrating the proxy information into queries or direct article links for sharing is a double-edged sword, because if we embed the proxy information into search histories or article links that can be shared, those links can't be shared outside the institution: every link will direct the person back to a proxy sign-in, and that proxy sign-in may not be available to every user the link is shared with. So we're trying to find a balance there, maybe an opportunity to share both with and without the proxy, because the sharing of the query right now is neutral. I can create a shareable query link and share it with any other Web of Science user, on your campus or not, whether they have proxy credentials or not, and that link should work for that user. But if we bring the proxy information in, it limits the shareable universe down to the people who have proxy credentials. Next slide. This, again, we'll share with you: these are the fields that are available for the Web of Science APIs, a good reference for what's available when creating queries as inputs to the APIs. We'll share this after the session, because it gives you a sense of how valuable the APIs can be and how you might automate API queries across our data. And next slide... you can skip over this one. More APIs. This is the Converis API roadmap; I can't speak to this at a high level of detail, since I'm not part of that development team, but we'll share it with you after the session. It gives you an overview of what we're looking at for the ongoing development of our APIs. 
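[Editor's note: to make concrete what embedding proxy information in a link means, many libraries use an EZproxy-style prefix, where the target URL is appended to the institution's proxy login URL. Any recipient of such a link is routed through that institution's sign-in first, which is exactly why it can't be shared outside the institution. A minimal sketch, with a made-up proxy hostname and an illustrative record URL:]

```python
from urllib.parse import quote

def proxy_wrap(proxy_prefix: str, target_url: str) -> str:
    """Prepend an EZproxy-style login prefix so the link routes
    through the institution's proxy before reaching the article."""
    # Percent-encode the target so it survives as a single query value.
    # (Some proxy configurations also accept the raw, unencoded URL.)
    return proxy_prefix + quote(target_url, safe="")

# Hypothetical institutional proxy prefix:
prefix = "https://proxy.example.edu/login?url="
link = proxy_wrap(
    prefix,
    "https://www.webofscience.com/wos/woscc/full-record/WOS:000000000000001",
)
print(link)
```

A link built this way forces a proxy sign-in for everyone, which is the trade-off described above: useful inside the institution, broken for anyone outside it.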
And again, this is something we're investing in, because there's a lot of value users are getting out of integrating with their existing systems through the APIs, and a roadmap for future developments is pretty important to the users of our APIs. So we'll share this with you after the session. And next slide. OK, so for InCites Benchmarking & Analytics, Michelle mentioned that we've done a reimagining of the JCR interface and are also delivering new metrics for journals that have never had citation metrics in the past. We're doing similar sorts of things for InCites Benchmarking & Analytics: impact profiles; citation topics, which I mentioned earlier, those clusters of publications around a particular topic; defining author positions on particular papers; and a more intuitive interface. Those are all recent enhancements that have come out in InCites Benchmarking & Analytics, living alongside the Journal Citation Reports platform, and we're spending a lot of time and effort satisfying the different needs for analysis that InCites provides for. And next slide. And then this is just the ongoing roadmap for InCites and JCR; you'll see more developments across JCR and InCites in the coming quarters as well. And I believe that's it. Yes, that is it. One of the things I wanted to end with is a quick demo, just so we can look at some of the features of the Web of Science. I believe I'm going to be able to share my screen now, and you should be seeing my Web of Science interface.


April Levy Yes.


Don Sechler Excellent. OK, so again, this is the homepage for the Web of Science. A couple of things I want to point out about this one page. One is that you can always go back to the classic version: if there's something you knew and loved that we haven't yet brought over, the classic version is still available, and this dropdown will just start a session for you in the classic version. The other products that may be available to you are in this products dropdown; this mirrors what you might see in a Google product, where you see the other available products in a little dropdown and can navigate to them. In the bottom right-hand corner we have an integration with a utility called Pendo, which provides the user guides and guided tours you'll encounter across the product. In fact, the first time you come in here, you may see a couple of different things intervene in the session to ask if you want a guided tour. We also use this for new features: when we've made a significant enhancement or a significant change to a workflow, you'll see a guided tour intervene, or a product update will pop up on the page to ask if you want more information. One of the things I want to point out here is also Suggest a Feature, which I think I need to be signed in for. I do: if I want to submit a request, I need to be signed in, but I can submit feedback directly in this form. And from this form I can navigate into the portal for feedback, which allows me to see what we're working on. I can see what is planned for development, all based on user feedback, and what's in build, which is basically what will be in upcoming releases. And then in my feedback dashboard, I can see all the different feedback that's been submitted. 
There's also everything that's awaiting feedback; typically those are things we've received in the past two weeks, and we try to evaluate them and get them up to date pretty quickly. And then there's What's New, which describes things we've just recently released. For example, the ability to submit a correction at the record level is something that was in the classic version that we just recently brought into the current version. So that link in the bottom right-hand corner is where you'll see links to our help, but also the ability to get to feedback, monitor feedback and suggest features directly in the interface. The interface itself should be fairly straightforward; a lot of the features you had in the past are still available here. When you land on the product, you're landing in a search of the Web of Science Core Collection. You can select any one index at a time to be searched, so you do have some ability to define your search on the front end by selecting which index you want to search. One change we did make: the search initially defaults to All Fields, the broadest search of our data, searching across everything that turns up on the Web of Science record. The classic interface defaulted to Topic, which searched just titles, keywords and abstracts. You can make that change or keep it as an All Fields search. On performing a search, one thing you may notice is that it's very quick: even a very broad search across millions of records returns results very fast. We've also added a few new quick filters into the upper left-hand panel. 
So you can very quickly filter to just review articles, just early access articles, just articles that are open access, or just articles that have associated data (that last filter does require a subscription to the Data Citation Index). Again, those quick filters get you to some of the hot things that are unique in the Web of Science. As far as editing a search, it's very quick to add new terms, even specifying the field; I don't have to go back to the home search page, I can go straight into the search, add additional terms, and the results immediately reflect my search. As I search, every search I do and every record I look at starts building a breadcrumb trail across the top, and if I need to go back to any point in this navigation, I can jump back to that earlier set of results right there. All of this navigation is also tracked as part of my history, so in my search history I can see all the navigation I've taken up to this point. Now, one thing about our search history: if I'm working as a signed-in user, that search history is automatically saved, so I don't have to take any action to save it. As I'm searching, it's building out that history, and I can see my sessions from Friday, August 20th, Thursday, August 12th, and the 10th; all of those searches are saved for me. Some of them may be alerts, some may not, but all of the searching and navigation I've done while signed in is available to me, so I can jump back to things I did in the past, and it's saved passively: I'm signed in, so it's just automatically saved. Copying query links has also been a very popular feature: I can copy a query link and then share it with anybody. Once it's copied, I can paste it into a chat screen. 
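[Editor's note: the copy query link described here is essentially the query serialized into a URL, so anyone who opens the link re-runs the same search. The product generates the real link format; the sketch below only shows the general idea of encoding a query into a shareable link and recovering it on the other end, using a made-up path and parameter name:]

```python
from urllib.parse import urlencode, parse_qs, urlparse

def make_query_link(query: str) -> str:
    """Encode a search query into a shareable URL (illustrative format only)."""
    # "summary" path and "q" parameter are hypothetical, not the product's actual scheme.
    return "https://www.webofscience.com/wos/woscc/summary?" + urlencode({"q": query})

link = make_query_link('ALL=("citation topics")')

# Anyone receiving the link can recover and re-run the same query:
recovered = parse_qs(urlparse(link).query)["q"][0]
print(recovered)
```

Because no proxy prefix is baked into such a link, it works for any recipient, on campus or off, which matches the "neutral" sharing behaviour described in the Q&A above.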
I can send it in an email, and anybody can paste that query in and just run the particular query I copied. It's not impressive here because I just did it in the same browser window, but that copy query link is pretty nice: you can do it from any search you're performing, and also from your history, where I can copy any of the previously saved queries as well. I can copy a direct article link or copy the query link, and again, these do not have the proxy information encoded in them, so they can be shared outside of your institution. If they're shared with somebody at your institution on your IP range, they can open the link and it runs automatically; and if you share with somebody who's off campus, that user can sign in with roaming access to the product, if they've personalized in the past, to get access to that service. So those are some of the more popular features we've added. Again, advanced search is still there. You have all the field tags, including more field tags than were available in the past, such as the funding details: all the funding information captured from the outside grant organizations, including grant titles, grant descriptions, and primary investigator and co-primary investigator data. All of that is searchable in advanced search under the funding details tag. That's something else that's brand new and part of the new Web of Science interface only; it is not in the classic version. I'm going to... Any more questions in the chat screen? I know we wanted to allow some time for Q&A, and I think I've gone a little bit over time. Are there any questions, or anything anybody would like to see? Michael or Michelle, is there anything you would like to add?


Michelle Fleetwood No, not for me, Don, thanks.


Michael Bragg And not for me, thank you, Don.


Don Sechler Are there any questions from the attendees? And if you can't think of any questions now, use that customer feedback portal to make suggestions or ask questions. If you have an idea, or wonder where something has been moved to, go ahead and use that feedback portal in the product to ask.


April Levy There's another question in the chat.


Don Sechler "So are you going to bring subjects to the Web of Science, not just InCites?" Yes, that is part of the citation topics we're bringing into the Web of Science. Again, those citation topics are more granular subject categories, groupings of articles around a given topic, that are much more direct and applied to the topic of the paper than the broad subject category you get at the journal level. So yes, that is something we'll be bringing to the Web of Science, and I believe we should have those in Q4 of this year. They're already in our data; we're just bringing them in in a way that's displayed and searchable in the Web of Science Core Collection. "Will this topic scheme be a controlled vocabulary?" It's not a thesaurus; the vocabulary is built by the algorithm that creates those topics. There is some management of the vocabulary; actually, I take that back, there is a small hierarchy to it, but it's not a traditional controlled vocabulary like a MeSH heading or a BIOSIS concept code. The vocabulary can change based on the algorithm and what's available in the data. As for management, one thing that has been part of this is that they've tried to make the categories distinct, so you're not getting a lot of overlap between categories; that's something they're trying to manage in the algorithm. There will always be some overlap that can't be managed, but they are trying to keep the topics as distinct as possible. 
When these things come to the Web of Science, there will be a lot more information about them, because right now they live in a very managed environment, in a very managed way, in InCites. Once we bring them to the Web of Science, where they're open, searchable and maybe cross-linkable, there will have to be a lot more explanation about how they actually function. So we will be adding a lot of detail around that new subcategory schema when we introduce it into the product. Also, one thing Michael just reminded me of: there are some changes to the Emerging Sources Citation Index planned this year. The Emerging Sources Citation Index, up to this point, has been a file that lives alongside the Web of Science Core Collection and includes journals that have met our criteria for inclusion on quality, but not on citation impact. That file will be turning into a five-year rolling file at the beginning of 2022. Right now it's a growing file from 2015 forward, but we're transitioning it into a rolling five-year file because it's gotten very large, with the inclusion of a large number of journals. Backfiles of that file can be purchased, and a purchase of the backfile transitions it back into a fully growing file: if somebody has invested in the backfile, the file just continues to grow as long as you maintain a Web of Science subscription. But if you haven't invested in the backfile, then the Emerging Sources Citation Index front file will be changing into a five-year rolling file at the beginning of the new year. If you haven't purchased the backfile, the front file is entirely gratis with the maintenance of a Web of Science Core Collection subscription. 
So there's no additional investment to maintain that front file; it's just the backfile that is an ongoing investment. I know we've gone way over time, and I'm thankful for everyone's patience here. A lot to talk about, and a lot of new things coming.


April Levy Well, thank you very much, Don, and thank you, Michelle and Michael. We appreciate getting this very detailed and thorough update. And as mentioned earlier, we have recorded the session, so we're going to make that available to our members as well.


Michael Bragg And we can provide you a copy of the deck that was used, so you can include that with the recording for anybody who wants to revisit this or share with colleagues who were unable to attend. And if you have any questions afterwards, you can always reach out to us; we're happy to field any inquiries you have and keep those lines of communication open.


April Levy OK, thank you all again so much. We really appreciate it.


Michael Bragg Thanks, April.


Don Sechler Thanks April, bye everybody.


April Levy OK, everyone, I'm going to end this event now.

