Planet Drupal

Drupal.org - aggregated feeds in category Planet Drupal

Promet Source: The SEO and UX Connection

Wed, 02/26/2020 - 18:33
In our current, digitally driven business climate, search engine optimization (SEO) and optimal web experiences are inherently intertwined. Each feeds off and builds upon the other. 

Drudesk: Integrate social media on your website via helpful Drupal 8 modules

Wed, 02/26/2020 - 17:15

New days dictate new rules on the web. Adding social media integration buttons to your website is now one of the crucial web design steps for ensuring the success of your business.

If you have a website on Drupal, this post will be of special interest to you. We will discuss how social media integration works in Drupal 8, what modules there are in this area, and how to integrate social media on your website using one of them — the Easy Social Drupal module.

Droptica: Schema.org And Metadata in Drupal

Wed, 02/26/2020 - 14:24
Schema.org metadata is one of the most important SEO optimisation issues. It is the name given to the extensions of HTML documents that allow search engine robots to better understand the meaning of individual subpages of your website. Drupal has handled this metadata since version 7. In this article, I will present different ways to implement it.

Lullabot: Sending a Drupal Site Into Retirement Using HTTrack

Wed, 02/26/2020 - 13:58

Maintaining a fully functional Drupal 7 site and keeping it updated with security updates year-round takes a lot of work and time. Some sites are only active during certain times of the year, so continuously upgrading to new Drupal versions doesn't always make the most sense. If a site is updated infrequently, it's often an ideal candidate for a static site. 

Mediacurrent: Open Waters Ep 11: Enterprise Marketing with Lynne Cappozi

Tue, 02/25/2020 - 20:00

 

In this episode, we're joined by Lynne Cappozi, Acquia’s CMO.  Lynne weighs in on how to maximize your investment in Acquia products, top digital marketing challenges, and how open source is changing the game for marketers. 

About Lynne

Lynne is one of Acquia’s boomerang stories, first serving as CMO in 2009 and returning to Acquia in 2016 to lead the marketing organization into its next stage of growth. Prior to her experience at Acquia, Lynne held various marketing leadership roles in the technology space at companies such as JackBe, Systinet & Lotus Development, all of which were acquired during her tenure. Outside of her work at Acquia, Lynne is on the board of directors at the Boston Children’s Hospital Trust and runs a nonprofit through the hospital.


Audio Download Link

Project Picks
  1. CVent
  2. GoGoGrandparent
Interview
  • Tell us about yourself and your role at Acquia.
  • What does Acquia do?
  • How has open source changed the practice of marketing for Acquia’s customers?
  • What kind of organizations make up Acquia’s customer base?
  • Being a marketer yourself, what do you see as the biggest challenge for enterprise marketers as we head into 2020?
  • What is Acquia doing to help marketers overcome those challenges?
  • Where do digital agencies like Mediacurrent fit into Acquia’s ecosystem?
  • What can marketers do to get the most value out of their investment in Acquia products?


Thanks for tuning in for another episode of Open Waters!  Looking for more useful tips, technical takeaways, and creative insights? Visit mediacurrent.com/podcast to subscribe and hear more episodes.

 

Tag1 Consulting: Insider insights on the commercial and API landscapes (part 1)

Tue, 02/25/2020 - 19:59
Over the last five years, decoupled Drupal has grown from a fringe topic among front-end enthusiasts in the Drupal community to something of a phenomenon when it comes to coverage in blog posts, tutorials, conference sessions, and marketing collateral. There is now even a well-received book by this author and a yearly conference dedicated to the topic. For many Drupal developers working today, not a day goes by without some mention of decoupled architectures that pair Drupal with other technologies. While Drupal’s robust capabilities for integration are nothing new, there have been comparatively few retrospectives on how far we’ve come on the decoupled Drupal journey.

Mediacurrent: Mediacurrent Sessions at DrupalCon 2020

Tue, 02/25/2020 - 18:19

DrupalCon 2020 sessions are here! The Mediacurrent team is proud to present 9 sessions at this year’s annual conference in Minnesota. 

With topics ranging from Drupal 9 to personalization to tips for preventing burnout, we’ll be sharing our Drupal knowledge from all angles. Here’s the presentation line up: 

Site Building, Development, and Coding

Page building showdown: Paragraphs v Layout builder
Presented by: Jay Callicott, VP of Technical Operations  

Join this session for an honest comparison of the current champ in Drupal 8 contrib, Paragraphs, versus Layout builder. 

Managing images in large scale Drupal 8 & 9 websites
Presented by: Mario Hernandez, Head of Learning 

Knowing how to properly configure your site to handle images can make a big difference in converting leads, increasing sales, and attracting more visitors to your site.

MagMutual.com: On the JAMStack with Gatsby and Drupal 8
Presented by: Bob Kepford, Director of Development and Ally Delguidice-Bove, Digital Strategist. Plus Sanjay Naruda, MagMutual and Ben Robertson, Gatsby

This session will be an inside look at our decoupled approach for MagMutual.com: combining open-source frameworks like Gatsby, Drupal 8, and Serverless, as well as third-party services for user management, a learning management system, and private APIs to build a robust custom platform.

Being Human, Contributions, and Community

How to plug into your passion and prevent burnout
Presented by: Brian Manning, IT Operations Manager and Victoria Miranda, Project Manager

Learn how to identify burnout — for yourself, your team, and your project.

Creating an organizational culture of giving back to Drupal
Presented by: Dave Terry, Co-founder and Partner 

Explore Mediacurrent’s journey around creating a culture of giving back and get inspired with actionable ideas.

Content and Digital Marketing 

Contextual, not creepy: Personalization tools, tricks, & tips 
Presented by: Ally Delguidice-Bove, Digital Strategist 

In this session, see how Empathy Mapping can help you create contextual and personal experiences for your users. 

Digital psychology & persuasion to increase user engagement
Presented by: Cheryl Little, Senior Director of User Experience; Becky Cierpich, UX/UI Designer; Danielle Barthelemy, Senior Digital Strategist  

Explore the psychological principles that drive human behavior and learn about the tools and techniques that can be used to captivate visitors' attention and enhance the user experience on your website. 

Leadership, Management, and Business

From tech expert to team leader: Lessons for making the leap
Presented by: Kelly Dassing, Senior Director of Project Management and Mark Shropshire, Senior Director of Development  

If you’re in the process of transitioning from a technical role to management, this session is for you! 

User Experience, Accessibility, and Design 

One usability step at a time: Improve your site with a UX audit
Presented by: Cheryl Little, Senior Director of User Experience and Becky Cierpich, UX/UI Designer

Start off on the right foot when planning website improvements. See how a UX audit can help.

Summit and Training Events

Register to join Mario Hernandez and Eric Huffman for a tutorial on Component-based theming with Twig. If you're in the healthcare field, be sure to join the Healthcare Summit where Mediacurrent is a presenting sponsor. 

Electric Citizen: Get Ready for DrupalCon Minneapolis

Tue, 02/25/2020 - 17:02

After over a decade of our team traveling to other cities, the annual DrupalCon North America is coming to our hometown! 

I've been attending DrupalCons each year, starting with DrupalCon Chicago in 2011. While I've had an incredible time visiting all these other great cities across the US, there's something special about being the host city. And I'm confident you'll love it too.

Here's your short to-do list:

Palantir: Navigating Complex Integrations, Part I: Understanding the Landscape

Tue, 02/25/2020 - 13:00

In the first part of this two-part series, we explore the factors that drive complexity when integrating third-party data sources with large-scale digital platforms

We use the word integration a lot when we’re talking about building large-scale digital platforms. The tools we build don’t stand in isolation: they’re usually the key part of an entire technology stack that has to work together to create a seamless user experience. When we talk about “integrating” with other services, usually we’re talking about moving information between the digital platform we’re building and other services that perform discrete tasks.

Platforms are not encapsulated monoliths. For just about any feature you could imagine for a platform, there may be a third-party service out there that specializes in doing it and you can optimize (for cost, output, functionality, ease of use, or many other reasons) by choosing to strategically integrate with those services. When we architect platforms (both big and small), we’re usually balancing constraints around existing vendors/service providers, existing data sets, and finding cost and functionality optimizations. It can be a difficult balancing act!

Some examples of integrations include:

  • On a healthcare website, clicking the “Make an Appointment” button might take you to a third-party booking service.
  • On a higher-education website, you might be able to view your current class schedule, which comes from a class management system.
  • On a magazine site, you might not even know that you’re able to read the full article without a paywall because the university network you’re browsing from integrates with the publisher’s site to give you full access.
  • On a government website, you might be able to see the wait time for your local Department or Registry of Motor Vehicles.

In short: an integration is a connection that allows us to either put or retrieve information from third-party data sources.

What Drives Complexity in Integrations

The main factors that drive complexity in integrations are:

  1. Is it a Read, Write, or Read/Write integration?
  2. What is the data transportation protocol?
  3. How well-structured is the data?
  4. How is the data being used?

Read, Write or Read/Write

When we talk about reading and writing, we’re typically talking about the platform (Drupal) acting on the third party service. In a Read-only integration, Drupal is pulling information from the third-party service and is either processing it or else just displaying it along with other information that Drupal is serving. In a Write-only integration, Drupal is sending information to a third-party service, but isn’t expecting processed data back (the services will often send status messages back to acknowledge getting the data, but that’s baked into the process and isn’t really a driving factor for complexity). The most complex type of integration is a Read/Write integration: where Drupal is both writing information to a third-party service and also getting information back from that third-party service for processing or display.

Access Control

It is impossible to separate the idea of accessing information from the question: is the data behind some type of access control? When you’re planning an integration, knowing what kind of access controls are in place will help you understand the complexity of the most basic mechanics of an integration. Is the data we’re reading publicly accessible? Or is it accessible because of a transitive property of the request? Or do we have to actively authenticate to read it? Write operations are almost always authenticated. Understanding how the systems will authenticate helps you to understand the complexity of the project.

Transportation Protocol

In thinking about the transportation protocol of the data, we expand this definition beyond the obvious HTTP, REST, and SOAP to include things like files that are FTP’ed to known locations and direct database access, where we write our own queries against a data cache. The mechanics of fetching or putting data affect how difficult the task can be. Some methods (like REST) are much easier to use than others (like FTP’ing to a server that is only accessible from within a VPN that your script isn’t in).

REST and SOAP are both protocols (in the wider sense of the word) for transferring information between systems over HTTP. As such, they’re usually used in data systems that are meant to make information easy to transport. When they’re part of the information system, that usually implies that the data is going to be easier to access and parse because the information is really designed to be moved. (That certainly doesn’t mean it’s the only way, though!)

Sometimes, because you’re integrating with legacy systems or systems with particular security measures in place, you cannot directly poll the data source. In those cases, we often ask for data caches or data dumps to be made available. These can be structured files (like JSON or XML, which we’ll cover in the next section) that are either made available on the server of origin or are placed on our server by the server of origin. These files can then be read and parsed by the integrating script. Nothing is implied by this transportation method: the data behind it could be extremely well structured and easy to work with, or it could be a mess. Often, when we’re working with this modality, we ask questions like: “how will the file be generated?”, “can we modify the script that generates the file?”, and “how well structured is the data?”. Getting a good understanding of how the data is going to be generated can help you understand how well-designed the underlying system is and how robust it will be to work with.
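As a trivial sketch of that modality (the file path and error messages are assumptions, not from any real project), the integrating script simply reads and parses whatever the origin server dropped off:

```php
<?php

// Read a data dump the server of origin placed on our filesystem.
$raw = file_get_contents('/var/data/incoming/profiles.json');
if ($raw === FALSE) {
  throw new \RuntimeException('Data dump not found - was the export generated?');
}

$records = json_decode($raw, TRUE);
if ($records === NULL) {
  // A malformed dump usually means the generating script changed,
  // which is why "can we modify that script?" matters up front.
  throw new \RuntimeException('Could not parse data dump: ' . json_last_error_msg());
}
```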

Data Structure

When data folks use phrases like “data structure,” I think there’s a general assumption that everyone knows exactly what they mean. It is one of those terms that seems mystical until you get a clear definition, and then it seems really simple.

When we talk about the data structures in the context of integrations, we’re really talking about how well-organized and how small the “chunks” of data are. Basically: can you concisely name or describe any given piece of information? Let’s look at an example of a person’s profile page. This could be a faculty member or a doctor or a board member. It doesn’t matter. What we’re interested in finding out is this: when we ask a remote system to give us a profile, it is going to respond with something. Those responses might look like any of the following:

  • Composed (“pre-structured”) response: Name: Jane Doe, EdD.; Display Name: Samantha Doe, EdD.
  • Structured response: First Name: Samantha; Last Name: Doe; Title: EdD.
  • Structured response, with extra processing needed: First Name: Jane; Last Name: Doe; Title: EdD.; Preferred Name: Samantha

All of these responses are valid and would (presumably) result in the same information being displayed to the end user but each one implies a different level of work (or processing) that we need to do on the backend. In this case: simpler isn’t always better!

If, for example, we’re integrating with this profile storage system (likely an HR system, or something like that) so we can create public profiles for people on a marketing site, we may not actually care what their legal first name for HR purposes is (trust me, I’m one of those folks who goes by my middle name—it’s a thing). Did you catch that in the third example above this person had a preferred name? If the expected result of this integration is a profile for “Samantha Doe, EdD.”, how do we get there with these various data structures? They could each require different levels of processing in order to ensure we’re getting the desired output of the correct record.
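To make that processing difference concrete, here is a hedged sketch (the field names are hypothetical, not from any real HR API) of the normalization each response shape would need in order to yield “Samantha Doe, EdD.”:

```php
<?php

/**
 * Builds a display name from any of the three response shapes above.
 *
 * Field names are illustrative assumptions only.
 */
function build_display_name(array $response) {
  // Composed response: the origin system already did the work for us.
  if (isset($response['display_name'])) {
    return $response['display_name'];
  }
  // Structured response: a preferred name, when present, should win
  // over the legal first name.
  $first = isset($response['preferred_name'])
    ? $response['preferred_name']
    : $response['first_name'];
  return sprintf('%s %s, %s', $first, $response['last_name'], $response['title']);
}
```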

The more granularly the information is structured, the easier it is to process for the purpose of changing the information.

At the other end of the spectrum is data that will require no processing or modification in order to be used. This is also acceptable and generally low complexity. If the origin data system is going to do all of the processing for us and all we’re doing is displaying the information, then having data that is not highly granular or structured is great. An example of that might be an integration with a system like Twitter: all you’re doing is dropping in their pre-formatted information into a defined box on your page. You have very little control over what goes in there, though you may have control over how it looks. Even if you can’t change the underlying data model, you can still impact the user’s experience of that information.

The key here, for understanding the complexity of the integration, is that you want to be on one extreme or the other. Being in the middle (partially processed data that isn’t well-structured) really drives up effort and complexity and increases the likelihood of there being errors in the output.

Data Usage

One of the best early indicators of complexity is the answer to the question “how is the data being used?” Data that is consumed by or updated from multiple points on a site is generally going to have more complex workflows to account for than data that is used in only one place. This doesn’t necessarily mean that the data itself is more complex, only that the workflows around it might be.

Take, for example, a magazine site that requires a subscription in order to access content (i.e., a paywall). The user’s status as a subscriber or anonymous user might appear in several different places on the page: a “my account” link in the header, a hidden “subscribe now!” call to action, and the article itself actually being visible. Assuming that the user subscription status is held in an external system, you might now be faced with the architectural decisions: do we make multiple calls to the subscription database or do we cache the response? If the status changes, how do we invalidate that cache throughout the whole system? The complexity starts to grow.

Another factor in the data usage to consider is how close the stored data is to the final displayed data. We often refer to this as “data transformations.” Some types of data transformations are easy, while others push the bounds of machine learning.

If you had data in a remote system that was variable in length (say, items in a shopping cart), then understanding how that data is stored AND how that will be displayed is important. If the system that is providing the data is giving you JSON data, where each item in the cart is its own object, then you can do all kinds of transformations with the data. You can count the objects to get the number of items in the cart; you can display them on their own rows in a table on the frontend system; you can reorder them by a property on the object. But what if the remote system is providing you a comma-separated string? Then the display system will need to first transform that string into objects before you can do meaningful work with them. And chances are, the system will also expect a csv string back, so if someone adds or removes an item from their cart, you’ll need to transform those objects back to a string again.
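As a sketch of the comma-separated case (the string format is an assumption; a real remote system will differ), the display layer ends up owning both directions of the transformation:

```php
<?php

// The remote system hands us a CSV string of cart items.
$raw = 'Coffee mug,T-shirt,Sticker pack';

// Transform the string into structured objects we can count,
// reorder, and render row by row.
$items = array_map(function ($name) {
  return (object) ['name' => trim($name)];
}, explode(',', $raw));
$count = count($items);

// ...and transform the objects back to a string when the cart
// changes, because the remote system expects CSV in return.
$items[] = (object) ['name' => 'Gift card'];
$raw = implode(',', array_map(function ($item) {
  return $item->name;
}, $items));
```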

All of this is rooted in a basic understanding: how will the data I’m integrating be used in my system?

Come back on Thursday for Part 2, where we’ll provide a framework to help you make sure you’re asking the right questions before embarking on a complex integration project. 

Complex Adaptive System by Richard Ricciardi licensed under CC BY-NC-ND 2.0.

Strategy Industries Government Healthcare Higher Education

Microserve: A commitment to quality: Creating a robust QA process

Tue, 02/25/2020 - 10:48

Having worked in software quality assurance for over 10 years, I have helped many organisations set up internal QA teams and rigorous QA processes from scratch. It’s both a rewarding and challenging task! 

As part of Microserve’s commitment to producing high-quality work for our clients, I joined the team in 2018, tasked with building a QA team and creating a suite of processes. One of our values as a business is ‘excellence as standard’. Prioritising quality in this way would ensure that we would provide our clients with the excellence we strive for. 

Agiledrop.com Blog: Developer guide to better UI/UX design

Tue, 02/25/2020 - 08:34

In this post, I'll share the basic principles of UI/UX design that I follow as a developer while working on projects which have little or no designs prepared by the client. I hope they'll help you to optimize your workflow and lead to the greater satisfaction of your clients.


Blue Drop Shop: Drupal Recording Initiative: #DrupalCampNJ and #FLDC20

Mon, 02/24/2020 - 22:48

I post updates on LinkedIn and to backers of the Drupal Recording Initiative, but I suppose blasting these via Planet Drupal is also a good idea. Well, at least until Drupal.tv adds that functionality (you can track that issue here).

Enjoy!

 

That puts the total number of captured sessions at 2,147. If you find these session recordings valuable, please consider supporting my efforts. The US is fairly well covered...so now it is time to focus on the rest of the world.

Drupal.org blog: Request for Sponsors: Automatic Updates Initiative Phase 2

Mon, 02/24/2020 - 22:40

The Drupal Association is seeking partners to help us advance the next phase of the Automatic Updates initiative.

The first phase of this work was generously sponsored by the European Commission, and supported by other partners including Acquia, Tag1 Consulting, Mtech, and Pantheon.

In this first phase, we accomplished a great deal:

  • Display of security PSAs directly in Drupal's admin interface
  • Automated readiness checks, to ensure that a site is prepared for updates
  • Automatic updates for Drupal Core in both Drupal 7 and Drupal 8.

But while this work laid the foundation, a great deal of work yet remains. The next phase hopes to add support for:

  • Sites managed using Composer
  • Automatic updates with Contributed modules
  • A front-end controller providing support for easy roll-back

The Drupal Association needs partners in order to move this work forward. We're looking both for organizations who can provide financial support, and teams who have expert developers who can contribute to development.

If you are interested, you can find a detailed scope of the remaining work attached to this post.

Download the Request for Sponsors

Contact: tim@association.drupal.org with questions.

Drupal Association blog: Drupal contribution culture - your opinions, experience and perspectives matter

Mon, 02/24/2020 - 22:06

How do we encourage those capable of giving back to Drupal to start doing so and once they are contributing how do we encourage them to do more? Dries highlighted this conundrum during his keynote at DrupalCon Amsterdam 2019.

Whilst various mechanisms exist to recognise contributions in Drupal, if we are to cultivate and grow the contribution culture we need to move beyond the current status quo. At DrupalCon, the Contribution Recognition Committee was proposed and self-nominations were invited.

“The purpose of this committee is to recommend solutions for how we recognize contributions to the Drupal project made by both individual and organizational contributors, and to advise the Drupal Association on how to weight each type of contribution relative to the others,” said Tim Lehnen, Chief Technology Officer, Drupal Association.

What have we achieved so far?

For several months now, newly appointed committee members have been researching and discussing contribution culture within Drupal and open source. To ensure recommendations are truly representative of organisation and individual contributions we are keen to canvas opinions and perspectives from far and wide.

Share your opinions, ideas and perspectives

An online survey is available now for those using or contributing to Drupal so they can provide insights which will be considered in our recommendations as a committee. I encourage you to participate and help us to reach members of the Drupal community in your local area.

Complete the survey today

Mediacurrent: A Recipe for a Graphql Server in Drupal Using graphql-php

Mon, 02/24/2020 - 21:44

Have you ever wanted to interact with Drupal data through a GraphQL client? Lots of people do. Most of the time, the Drupal GraphQL module is the tool that you want. It is great for things like:

  • A React JS app that shows a catalog of products
  • A Gatsby blog
  • Building an API for many different clients to consume

However, there is one case that the Graphql module does not cover: building a Graphql schema for data that is not represented as a Drupal entity.

The Graphql module maintainers decided to only support entities. There were two big reasons for this:

  1. Under normal circumstances, just about every piece of your Drupal content is an entity.
  2. The graphql-php package is a good enough abstraction layer for exposing other types of data.

In this article, I will be discussing how to implement a custom graphql-php endpoint and schema for interacting with a custom, non-entity data source.

Why?

If you’ve gotten this far, you may want to ask yourself “why is my data not an entity?” There are a few acceptable reasons:

Performance

Is part of your use case inserting tons of records at once? In this case, you may not want your data to be a Drupal entity. This will let you take advantage of MySQL bulk inserts.
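As context for that performance point, Drupal's database API batches many rows into a single INSERT statement by chaining values() calls. A rough sketch, assuming the custom list_recipient table described later in this article:

```php
<?php

// Build one multi-row INSERT instead of one query per record.
$query = \Drupal::database()->insert('list_recipient')
  ->fields(['contact_id', 'list_nid', 'status', 'email']);

// Each values() call queues one row; a single query runs on execute().
foreach ($recipients as $recipient) {
  $query->values([
    'contact_id' => $recipient['contact_id'],
    'list_nid' => $recipient['list_nid'],
    'status' => 'subscribed',
    'email' => $recipient['email'],
  ]);
}
$query->execute();
```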

Inheritance

When you came to the site, was the data already not an entity? Unfortunately, this often justifies keeping it that way rather than doing a time-consuming migration.

Implementing graphql-php

The graphql-php docs are pretty good. It is not much of a leap to implement this in Drupal.

Here is a summary of the steps:

  1. Install graphql-php with composer
  2. Set up a Drupal route to serve Graphql
  3. Establish a Graphql schema
  4. Establish the resolver and its arguments
  5. Execute and serve the Graphql Response
Step One: Install graphql-php with Composer

The library is installed like any other Composer dependency: composer require webonyx/graphql-php.

Step Two: Set up a Drupal route to serve Graphql

We’ll start with a basic Drupal controller.

<?php

namespace Drupal\my_graphql_module\Controller;

use Drupal\Core\Controller\ControllerBase;
use Symfony\Component\HttpFoundation\Request;

class MyGraphqlController extends ControllerBase {

  /**
   * Route callback.
   */
  public function handleRequest(Request $request) {
    return [];
  }

}


You don’t need to approach this differently than a normal Drupal route.

Here is what the route definition might look like in my_graphql_module.routing.yml:
 

my_graphql_module.list_recipient_graphql:
  path: '/list-recipient-graphql'
  defaults:
    _title: 'List recipient graphql endpoint'
    _controller: '\Drupal\my_graphql_module\Controller\ListRecipientGraphql::handleRequest'
  methods: [POST]
  requirements:
    _list_recipient_graphql: "TRUE"


A few things to note:

  • It is wise to restrict the route to allow only the POST method since that is how Graphql clients send queries.
  • The _list_recipient_graphql requirement would be an Access Service. Any of the other Drupal route access methods would also work.
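A minimal sketch of such an access service follows; the class name, permission string, and service wiring are assumptions, not taken from the article:

```php
<?php

namespace Drupal\my_graphql_module\Access;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Routing\Access\AccessInterface;
use Drupal\Core\Session\AccountInterface;

/**
 * Checks access for the list recipient graphql route.
 */
class ListRecipientGraphqlAccess implements AccessInterface {

  /**
   * Grants access only to users with an assumed custom permission.
   */
  public function access(AccountInterface $account) {
    return AccessResult::allowedIfHasPermission($account, 'query list recipient graphql');
  }

}
```

The service would then be registered in my_graphql_module.services.yml with a tag of name access_check and applies_to set to _list_recipient_graphql, which is how Drupal connects the route requirement above to this class.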

Assumptions - Your Data, and How You Want to Access it

For this tutorial, I’ll assume your data is a simple MySQL table similar to this:
     

+-----------------+------------------+------+-----+---------+----------------+
| Field           | Type             | Null | Key | Default | Extra          |
+-----------------+------------------+------+-----+---------+----------------+
| contact_id      | int(10) unsigned | YES  | MUL | NULL    |                |
| list_nid        | int(10) unsigned | NO   | MUL | NULL    |                |
| status          | varchar(256)     | NO   |     | NULL    |                |
| id              | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| email           | varchar(256)     | NO   |     | NULL    |                |
+-----------------+------------------+------+-----+---------+----------------+


In the real world, this roughly translates to a record that links contacts to mailing lists. You can see now why we would want to insert lots of these at once! Let’s also say that you have a React component where you would like to use the Apollo client to display, filter and page through this data.

Step Three: Establish a Graphql Schema

We can start with a relatively simple Graphql schema. See below (note, the resolver is blank for now):
 

<?php

namespace Drupal\my_graphql_module\Controller;

use Drupal\Core\Controller\ControllerBase;
use Symfony\Component\HttpFoundation\Request;
use GraphQL\Type\Schema;
use GraphQL\Type\Definition\ObjectType;
use GraphQL\Type\Definition\Type;

class MyGraphqlController extends ControllerBase {

  /**
   * Route callback.
   */
  public function handleRequest(Request $request) {
    // The schema for a List Recipient, wrapped in ObjectType so it can
    // be passed to Type::listOf() below.
    $list_recipient_type = new ObjectType([
      'name' => 'ListRecipient',
      'fields' => [
        'email' => [
          'type' => Type::string(),
          'description' => 'Recipient email',
        ],
        'contact_nid' => [
          'type' => Type::int(),
          'description' => 'The recipient contact node ID',
        ],
        'list_nid' => [
          'type' => Type::int(),
          'description' => 'The recipient list node ID',
        ],
        'name' => [
          'type' => Type::string(),
          'description' => 'Contact name',
        ],
        'id' => [
          'type' => Type::int(),
          'description' => 'The primary key.',
        ],
      ],
    ]);

    $list_recipients_query = new ObjectType([
      'name' => 'Query',
      'fields' => [
        'ListRecipients' => [
          'type' => Type::listOf($list_recipient_type),
          'resolve' => function ($root_value, $args) {
            // We'll fill this in later. This is where we actually get the
            // data, and it depends on paging and filter arguments.
          },
        ],
      ],
    ]);

    $schema = new Schema([
      'query' => $list_recipients_query,
    ]);
  }

}


You will notice that the ListRecipient Graphql type looks pretty similar to our database schema. That is pretty much its job - it establishes what fields are allowed in Graphql requests and it must match the fields that our resolver returns.

Step Four: Resolver and Arguments

In this step, we will add the resolver and the argument definition. Here it is:
 

<?php

namespace Drupal\my_graphql_module\Controller;

use Drupal\Core\Controller\ControllerBase;
use Symfony\Component\HttpFoundation\Request;
use GraphQL\Type\Schema;
use GraphQL\Type\Definition\ObjectType;
use GraphQL\Type\Definition\Type;
use GraphQL\Type\Definition\InputObjectType;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Drupal\my_graphql_module\ListRecipientManager;

class MyGraphqlController extends ControllerBase {

  /**
   * The recipient manager.
   *
   * It's usually wise to inject some kind of service to be your
   * resolver - though you don't have to.
   *
   * @var \Drupal\my_graphql_module\ListRecipientManager
   */
  protected $recipientManager;

  /**
   * The MyGraphqlController constructor.
   *
   * @param \Drupal\my_graphql_module\ListRecipientManager $recipient_manager
   *   The recipient manager.
   */
  public function __construct(ListRecipientManager $recipient_manager) {
    $this->recipientManager = $recipient_manager;
  }

  /**
   * {@inheritdoc}
   *
   * @param \Symfony\Component\DependencyInjection\ContainerInterface $container
   *   The Drupal service container.
   *
   * @return static
   */
  public static function create(ContainerInterface $container) {
    return new static(
      $container->get('my_graphql_module.list_recipient_manager')
    );
  }

  /**
   * Route callback.
   */
  public function handleRequest(Request $request) {
    // The schema for a List Recipient, wrapped in ObjectType so it can
    // be passed to Type::listOf() below.
    $list_recipient_type = new ObjectType([
      'name' => 'ListRecipient',
      'fields' => [
        'email' => [
          'type' => Type::string(),
          'description' => 'Recipient email',
        ],
        'contact_nid' => [
          'type' => Type::int(),
          'description' => 'The recipient contact node ID',
        ],
        'list_nid' => [
          'type' => Type::int(),
          'description' => 'The recipient list node ID',
        ],
        'name' => [
          'type' => Type::string(),
          'description' => 'Contact name',
        ],
        'id' => [
          'type' => Type::int(),
          'description' => 'The primary key.',
        ],
      ],
    ]);

    // The filter input type.
    $filter_type = new InputObjectType([
      'name' => 'FilterType',
      'fields' => [
        'listId' => [
          'type' => Type::int(),
          'description' => 'The list node ID',
        ],
      ],
    ]);

    $list_recipients_query = new ObjectType([
      'name' => 'Query',
      'fields' => [
        'ListRecipients' => [
          'args' => [
            'offset' => [
              'type' => Type::int(),
              'description' => 'Offset for query.',
            ],
            'limit' => [
              'type' => Type::int(),
              'description' => 'Limit for query.',
            ],
            'filter' => [
              'type' => $filter_type,
              'description' => 'The list recipient filter object',
            ],
          ],
          'type' => Type::listOf($list_recipient_type),
          'resolve' => function ($root_value, $args) {
            return $this->recipientManager->getRecipients(
              $args['filter']['listId'],
              $args['offset'],
              $args['limit']
            );
          },
        ],
      ],
    ]);

    $schema = new Schema([
      'query' => $list_recipients_query,
    ]);
  }

}



I will first explain the “args” property of the ListRecipients query. “args” defines anything that you would like to allow GraphQL clients to pass in that may affect how the resolver works. In the above example, we establish filter and paging support. If we wanted to support sorting, we would implement it via args too: think of args as the portal through which you supply your resolver with everything it needs to fetch the data. Here is the GraphQL you could use to query this schema:
 

query ListRecipientQuery($limit: Int, $offset: Int, $filter: FilterType) {
  ListRecipients(limit: $limit, offset: $offset, filter: $filter) {
    email
    name
  }
}
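From the client side, all the controller needs is a JSON body containing a "query" string and a "variables" object matching the declared args. Here is a minimal JavaScript sketch; the /graphql path is an assumption, standing in for whatever route you registered for the controller in your module's routing file:

```javascript
// The query from above, with its variable definitions.
const listRecipientQuery = `
  query ListRecipientQuery($limit: Int, $offset: Int, $filter: FilterType) {
    ListRecipients(limit: $limit, offset: $offset, filter: $filter) {
      email
      name
    }
  }
`;

// Build the JSON body the controller decodes: a "query" string plus
// a "variables" object whose keys mirror the args defined in the schema.
function buildRequestBody(listId, offset = 0, limit = 10) {
  return JSON.stringify({
    query: listRecipientQuery,
    variables: { limit, offset, filter: { listId } },
  });
}

// POST it to the (hypothetical) /graphql route, e.g. with fetch:
// fetch('/graphql', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: buildRequestBody(42),
// })
//   .then((res) => res.json())
//   .then(({ data }) => console.log(data.ListRecipients));
```

Per the GraphQL spec, a successful response arrives under a top-level "data" key (here, data.ListRecipients), with any failures reported under a top-level "errors" key.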
Step Five: Execute and Serve the Graphql Response

The last thing you need to do is tell graphql-php to execute the incoming query. Here is the whole thing:
 

<?php

namespace Drupal\my_graphql_module\Controller;

use Drupal\Core\Controller\ControllerBase;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Drupal\my_graphql_module\ListRecipientManager;
use Drupal\Component\Serialization\Json;
use Symfony\Component\HttpFoundation\JsonResponse;
use GraphQL\Type\Schema;
use GraphQL\Type\Definition\ObjectType;
use GraphQL\Type\Definition\Type;
use GraphQL\Type\Definition\InputObjectType;
use GraphQL\GraphQL;

class MyGraphqlController extends ControllerBase {

  /**
   * The recipient manager.
   *
   * It's usually wise to inject some kind of service to be your
   * resolver - though you don't have to.
   *
   * @var \Drupal\my_graphql_module\ListRecipientManager
   */
  protected $recipientManager;

  /**
   * The MyGraphqlController constructor.
   *
   * @param \Drupal\my_graphql_module\ListRecipientManager $recipient_manager
   *   The recipient manager.
   */
  public function __construct(ListRecipientManager $recipient_manager) {
    $this->recipientManager = $recipient_manager;
  }

  /**
   * {@inheritdoc}
   *
   * @param \Symfony\Component\DependencyInjection\ContainerInterface $container
   *   The Drupal service container.
   *
   * @return static
   */
  public static function create(ContainerInterface $container) {
    return new static(
      $container->get('my_graphql_module.list_recipient_manager')
    );
  }

  /**
   * Route callback.
   */
  public function handleRequest(Request $request) {
    // The schema for a List Recipient.
    $list_recipient_type = new ObjectType([
      'name' => 'ListRecipient',
      'fields' => [
        'email' => [
          'type' => Type::string(),
          'description' => 'Recipient email',
        ],
        'contact_nid' => [
          'type' => Type::int(),
          'description' => 'The recipient contact node ID',
        ],
        'list_nid' => [
          'type' => Type::int(),
          'description' => 'The recipient list node ID',
        ],
        'name' => [
          'type' => Type::string(),
          'description' => 'Contact name',
        ],
        'id' => [
          'type' => Type::int(),
          'description' => 'The primary key.',
        ],
      ],
    ]);

    // The filter input type.
    $filter_type = new InputObjectType([
      'name' => 'FilterType',
      'fields' => [
        'listId' => [
          'type' => Type::int(),
          'description' => 'The list node ID',
        ],
      ],
    ]);

    // The root query type, with its args and resolver.
    $list_recipients_query = new ObjectType([
      'name' => 'Query',
      'fields' => [
        'ListRecipients' => [
          'args' => [
            'offset' => [
              'type' => Type::int(),
              'description' => 'Offset for query.',
            ],
            'limit' => [
              'type' => Type::int(),
              'description' => 'Limit for query.',
            ],
            'filter' => [
              'type' => $filter_type,
              'description' => 'The list recipient filter object',
            ],
          ],
          'type' => Type::listOf($list_recipient_type),
          'resolve' => function ($root_value, $args) {
            return $this->recipientManager->getRecipients($args['filter']['listId'], $args['offset'], $args['limit']);
          },
        ],
      ],
    ]);

    $schema = new Schema([
      'query' => $list_recipients_query,
    ]);

    // Decode the incoming request body and hand it to graphql-php.
    $body = Json::decode($request->getContent());
    $graphql = $body['query'] ?? NULL;
    if (!$graphql) {
      return new JsonResponse(['message' => 'No query was found'], 400);
    }
    $variables = !empty($body['variables']) ? $body['variables'] : [];
    $result = GraphQL::executeQuery($schema, $graphql, NULL, NULL, $variables)->toArray();
    return new JsonResponse($result);
  }

}

Conclusion

I hope that you will find this helpful! Remember that the graphql-php docs are very good as well.

Check back soon for an article on supporting GraphQL mutations and error handling!

Debug Academy: Advance Your Career With DebugAcademy at DrupalCon 2020

Mon, 02/24/2020 - 20:36

lindseygemmill Mon, 02/24/2020

Amazee Labs: Don’t Wait for Drupal 9 -- There’s Never Been a Better Time to Upgrade Your Site

Mon, 02/24/2020 - 20:02
This blog will outline the differences between the last migration and the upcoming shift to Drupal 9, what it means that Drupal 7 will be end-of-life by 2021, and why there has never been a better time to migrate to Drupal 8, which will make the transition even easier. 

Flocon de toile | Freelance Drupal: Starting a migration with Migrate programmatically

Mon, 02/24/2020 - 19:09
Migrate, a module integrated into the Drupal 8 Core, is a powerful solution for setting up data import processes from any data source (CSV, XML, Database, JSON, etc.) to a Drupal 8 project. The purpose of this post is not to explore all the facets of Migrate, as they are numerous and are already covered in many blog posts or official documentation, but rather to address a particular point: how to programmatically launch a migration?

InternetDevels: ID Drupal Contribution Day 22/02/2020: how it was

Mon, 02/24/2020 - 17:29

Every contribution makes Drupal shine brighter! The InternetDevels team knows it, so we decided to hold ID Drupal Contribution Day on Saturday, February 22. We gathered to improve a bunch of modules and make them ready for a smooth Drupal 9 upgrade. Let’s see how it was.


Bloomidea's Blog: How to Custom Sort your Drupal Commerce Store

Mon, 02/24/2020 - 16:06

The Views and Facets modules give you some advanced sorting options for your catalogue of products when building your ecommerce store. But sometimes all those algorithmic options are not what the client wants: they want to custom sort their merchandising in the way that best promotes their products.

Tags: drupal